
Docker vs. Fedora. Docker loses.

My word, posts are starting to pile up before I’ve had a chance to finish them. But I digress.

The state of Docker in Fedora is… questionable. The same goes for RHEL and other brother and sister distros. The issue I ran into is listed as the last comment on this here page, with the gist of it being:

```
Error: COMMAND_FAILED: 'python-nftables' failed: 
JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"table": {"family": "inet", "name": "firewalld_policy_drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_input", "type": "filter", "hook": "input", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_forward", "type": "filter", "hook": "forward", "prio": 9, "policy": "drop"}}}, {"add": {"chain": {"family": "inet", "table": "firewalld_policy_drop", "name": "filter_output", "type": "filter", "hook": "output", "prio": 9, "policy": "drop"}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_input", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_forward", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}, {"add": {"rule": {"family": "inet", "table": "firewalld_policy_drop", "chain": "filter_output", "expr": [{"match": {"left": {"ct": {"key": "state"}}, "op": "in", "right": {"set": ["established", "related"]}}}, {"accept": null}]}}}]}
```

The infuriating thing is that I can’t remember exactly what I did to cause this issue. On some level, I’m trying to get Bacula running on a client VM, as I’ve successfully set up the Bacula server. I was having issues connecting to it from the server, so I went and added an exception through firewall-cmd, and upon running it with --reload, that lovely little trinket raised its head above the ground.
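For the record, the exception would have been along these lines (a sketch from memory; 9102/tcp is the standard Bacula file daemon port):

```shell
# Permanently open the Bacula file daemon port, then reload
# firewalld so the change takes effect.
sudo firewall-cmd --permanent --add-port=9102/tcp
sudo firewall-cmd --reload
```

It was that --reload step that triggered the error above.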

Then I tried the workaround listed in comment two of this page. That didn’t help, so I put it back. I became so frustrated with the disarray between Docker and RHEL that I started to consider alternatives. Then a comment on this page (almost lost that very valuable link) pushed me over the edge:

[…] Docker + CentOS/RHEL in its default configuration is a very bad idea right now […] A lot of things suggest that el7 is a pretty good platform for a host that is supposed to do nothing apart from docker […] It is not a good platform.

SAKUJ0 (GitHub)

Yup. Bye-bye, Docker, and all of this after I’ve successfully set up so many containers including Filebeat. I didn’t switch over to Podman when I first started up my Docker host (which was probably in the order of weeks ago, but things move fast when you’re me). However, when I heard that it didn’t support docker-compose as a result of not supporting the Docker API, I thought “nope”. This, though. This is much worse than not being able to use docker-compose. Plus, it means I get to learn how to use a new product. Yay (with no sarcasm).

What Happens to docker-compose?

I currently manually run docker-compose from CLI to deploy my containers (some of which I wrote and some from the interwebs). I do, however, build my containers that I write using a private GitLab server, so instead of having to do all the manual things, my plan is to trigger a deployment after building.
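As a sketch of that plan, the pipeline might grow a deploy stage along these lines (every name, registry, and host here is a hypothetical placeholder, not my actual setup):

```yaml
# Hypothetical .gitlab-ci.yml fragment: build and push the image,
# then run docker-compose on the target host over SSH.
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:latest .
    - docker push registry.example.com/myapp:latest

deploy:
  stage: deploy
  script:
    - ssh deploy@docker-host 'cd /srv/myapp && docker-compose pull && docker-compose up -d'
```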

And For Those I Don’t Build?

…that I’m not sure about. Historically it has been nice to have a docker-compose.yml file sitting on my server which spins up all related containers that I haven’t built. I’m sure there’s a nice solution. Podman Pods are an option, albeit not a lift-and-shift, but that’s okay. Learning time!
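For illustration, the kind of docker-compose.yml I mean is nothing fancy (service names and images below are placeholders):

```yaml
# Minimal hypothetical docker-compose.yml for containers I don't
# build: one app container and the redis instance it talks to,
# sharing a default bridge network.
version: "3"
services:
  app:
    image: example/app:latest
    ports:
      - "8080:8080"
    depends_on:
      - redis
  redis:
    image: redis:6
```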

For reference, I will be using this post as a starting point. May it serve you well :)

Update: 0347 hrs, the next morning

I seem to have fallen into the trap of not looking at the time… whoops. It turns out that podman and the whole “don’t run as root” spiel isn’t as amazing as everyone makes it out to be. There is quite a good reason why there is little documentation on it online: it’s very difficult to use if the containers you intend to run haven’t been made with the don’t-run-as-root idea in mind. I’ve had five containers fail to behave at this point, and it’s officially time to get rid of the naïve approach of trying to achieve the tiniest Venn intersection of technologies ever (I’m exaggerating, of course) and jump over to Docker.

At this point, there are two options (short of the third, which is using an OS other than Fedora… which I won’t do):

  1. Use Docker CE designed for Fedora 31.
  2. Go back to moby-engine and try to fix what went wrong.

At the moment, I’m leaning towards option two. It was an absolute pain, but the pain that comes with running rootless containers has been far underplayed, and at this point I’m desperate for the solution that “just works”.

But for now, we do nothing except sleep.

When It Actually Became Light

There is one issue which drove me to consider podman in the first place, and that is my understanding of where docker-compose sat in the world of both technical and practical considerations. From everything I had read, docker-compose wasn’t to be used in production. Because what I am doing here is largely of educational value, I wanted to be able to walk away and say “Hey, I’ve got some practical experience with Docker in an enterprise context” (even if not for an enterprise).

And there stands the issue. So I thought “Perhaps my introduction to podman will force the issue and make me come to a conclusion.” Unfortunately, one can push too hard in a particular direction, and this one has clearly illustrated that the effort has far exceeded the payoff.

There are plenty of pages which divide the masses with docker-compose, but this gave me pause:

This. Most people shouldn’t use k8s. They don’t need the benefits it brings and they’ll make their devops a hella lot more complicated.

Docker compose is fine 👍🏼. Focus on delivering value to your customers.

However, the question does remain: what is the standing of docker-compose in production, and as a follow-up question, what is the intended use of it or the equivalent “proper” method in a CI/CD context? That’s a question that remains to be answered.

Choices, Choices

This site gave the following perspective, which has convinced me to try the Docker CE approach, given that I haven’t done so before:

Moby is the basis of the Docker platform […] you shouldn’t be using Moby if you’re not interested in building your own container infrastructure from scratch […] TL;DR Don’t use Moby, if you want to use it you should already know why, if you don’t then you probably need Docker.

marksei

So, let’s try the Docker CE approach from this page.

Trying Docker CE

Unfortunately, I found a couple of things:

  1. Even though uninstalling podman and installing docker-ce was easy, I was under the impression that stopping and removing all podman containers and removing all pods would be enough. Yet I find a bunch of s6-supervise processes still hanging around, and I have a suspicion that they’re running my actual Docker containers… on restart: always. Eek.
  2. The Docker containers I previously had listed when using moby-engine seem to have disappeared, even though I never actually removed them upon uninstalling it. Perhaps it’s an incompatibility with docker-ce, but I wouldn’t have thought so.

Those processes aren’t going away, so I’ll reboot…

Doing so shows two things:

  1. Those s6-supervise processes are still there. As long as they don’t consume resources, I’m not too worried about them, but I’m not sure if they’re remnants of a past installation.
  2. My Docker containers are up! Yay! No extra work apart from rewiring them from the file changes I made when using podman.

However, they don’t appear to be able to communicate internally. One of my containers can’t contact a redis container spun up as part of the same network. That ain’t good. Interestingly, my nginx container is reachable, but doesn’t want to connect to any container ports. Perhaps I missed adding docker0 to trusted in firewall-cmd.
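For reference, the change I have in mind is along these lines (a sketch; whether trusted is even the right zone here is part of the question):

```shell
# Put the docker0 bridge interface in the trusted zone, then reload.
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload
```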

…nope, that didn’t do it. Maybe I have to re-create the containers.

…that didn’t do it, either.

1200 hrs - back at it

So I’m going a little heavy on this anti-Docker thing. To clarify, the goal was initially to experiment and learn about other technologies, and to fix configuration “issues” in the process.

When things stopped working at a fundamental level (because I was silly and used a system that I actually relied upon for day-to-day needs), I was forced to make a time-pressured decision on what technology to use, which makes it all too tempting to draw quick conclusions from the web. A lot of people are quick to find the next “cool thing”, and Docker sits at the precipice of that (as evidenced by this post). The truth is that any technology can be insufficient for one’s needs, but for me it’s about balancing:

  1. The satisfaction of learning and curiosity with the requirement of stability and having free time.
  2. Where to draw the line on the relentless need to find something new, at the risk of ignoring what already works well.

Given the above two points, I will keep this production system up and running with Fedora and docker-ce, and will work out what’s going wrong with it in order to do so. Then, on a separate VM, I will test out using other experimental technologies such as podman.

The Problem At Hand

There are currently internal container connection issues. I know it has something to do with firewalld because running systemctl stop firewalld restores the connection. The issue exists not only with containers talking to each other on the same bridge network within Docker, but also outside of the container. I have an nginx container which functions almost solely as a reverse proxy. It references the proxy_pass server that it contacts by IP address (which is actually that of the Docker host).

When firewalld is up, nginx is reachable but gives a 502 Bad Gateway. Stopping firewalld shows the webpage for the container in question without any issues. Interestingly, the container whose webpage I visit is directly accessible via the Docker host’s DNS and port even when firewalld is up, indicating that this is in fact a connection issue that originates from the containers themselves. The only question is: What’s causing it?

My basic understanding of firewalld is that it acts as a “broker” or “director” for the actual firewall rule implementation (which used to be iptables and is now nftables). This post (at the Whitelist docker in firewall heading) indicates that the --add-masquerade command of firewall-cmd has this effect:

[…] will allow docker to make local connections. This is particularly useful when multiple Docker containers are in as a development environment.

Kevin “Eonfge” Degeling

That’s unfortunate, because running that gives:

```
Warning: ALREADY_ENABLED: masquerade
```
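(For reference, the command being run is of this shape, applied to the default zone; a sketch:)

```shell
# Enable masquerading permanently and apply it.
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload
```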

So what could be the issue, then? Googling the exact phrase gives only a handful of results, half of which are in Chinese and none of which are relevant. I suppose the only thing left to do with my relative inexperience with Docker (which seems crazy given how much I’ve learned) is to blow everything away and start anew.

The great thing is that Docker is based on ephemeral containers, so the data is all on my disk anyways (since I learned the hard way not to use Docker volumes irrespective of what the official documentation suggests).

phoenixnap.com, linuxize.com, and digitalocean.com all suggest running a plethora of commands to remove different configuration items including containers, networks, and volumes. Instead, I’m going straight to the source and removing Docker’s state directly. The aforementioned official documentation hints at volumes being stored at /var/lib/docker/volumes/. Intuition tells me that I should be able to find other configuration data there, too. Sure enough, /var/lib/docker/ has other lovely things, too:

```
drwx------   2 root root    24 Oct  2 00:55 builder
drwx--x--x   4 root root    92 Oct  2 00:55 buildkit
drwx--x--x   3 root root    20 Oct  2 00:55 containerd
drwx------  47 root root  4096 Oct 17 09:39 containers
drwx------   3 root root    22 Oct  2 00:55 image
drwxr-x---   3 root root    19 Oct  2 00:55 network
drwx------ 432 root root 36864 Oct 17 11:58 overlay2
drwx------   4 root root    32 Oct  2 00:55 plugins
drwx------   2 root root     6 Oct 17 11:58 runtimes
drwx------   2 root root     6 Oct  2 00:55 swarm
drwx------   2 root root     6 Oct 17 11:58 tmp
drwx------   2 root root     6 Oct  2 00:55 trust
drwx------  22 root root  4096 Oct 17 09:37 volumes
```

First, I’ll stop all containers and remove Docker itself. I’m specifically not deleting those containers so I can verify that they were contained in the configuration I’m to delete.

```shell
docker stop $(docker ps -aq)
sudo systemctl disable --now docker
sudo dnf remove docker-*
sudo dnf config-manager --disable docker-*
```

Just to make triply sure, I’ll confirm that the old cgroups (v1) are enabled, since that could have changed when I started using podman:

```shell
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```

And for good measure, the old IT support trick:

```shell
sudo reboot
```

Now (and I’m aware that this is a lot of back-and-forth), my research shows that moby-engine really is overwhelmingly popular. I did say I would use docker-ce, but if Moby was working before, it can work now. I’ll remove the docker-ce repo previously added:

```shell
sudo rm /etc/yum.repos.d/docker-ce.repo
```

And instead of irreversibly destroying everything, let’s move the Docker directory rather than deleting it:

```shell
sudo mv /var/lib/docker /docker_old
```

A quick find revealed any extraneous directories that could cause issues:

```shell
sudo find / -type d -name docker
```

Some I’m not worried about (such as /etc/docker), but others such as the .local/share/containers directory in my home folder must go. They might be from podman. Let’s check that we are in fact using nftables for the firewalld implementation:

```shell
sudoedit /etc/firewalld/firewalld.conf
```

(by the way, here’s why you should use sudoedit)
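If you’d rather not open an editor just to look, a quick grep does the same check (assuming the stock config path):

```shell
# Show the active firewalld backend; nftables is the
# default on recent Fedora releases.
grep '^FirewallBackend' /etc/firewalld/firewalld.conf
```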

And for good measure, one more reboot before starting anew:

```shell
sudo reboot
```

And now to get rolling:

```shell
sudo dnf install moby-engine docker-compose
sudo systemctl enable docker
sudo reboot
docker run hello-world
```

Did it work?

```
Hello from Docker!
This message shows that your installation appears to be working correctly.
```

Sure did. Now, let’s check if we have the leftover containers.

```shell
docker ps -a
```

Nope, only the hello-world container, which I will now delete:

```shell
docker rm 1dac
```

Now, let’s get my Elastic monitoring up and running first. That way, I can use it as somewhat of a checklist for what else I have to get running:

```
[...]

Digest: sha256:74ba0f876a05caf1b5501715937474c43b6d8722b0a8bb62a16c9dc8c0e73903
Status: Downloaded newer image for docker.elastic.co/beats/heartbeat:7.9.2
Creating es01 ... 
Creating es01  ... error
Creating hb01  ... 
WARNING: Host is already in use by another container

ERROR: for es01  Cannot start service es01: driver failed programming external connectivity on endpoint es01 (b3542b871c1c3f3e2a4d1058cdc477647ab180e426fa97dd785cCreating kib01 ... error
WARNING: Host is already in use by another container

ERROR: for kib01  Cannot start service kib01: driver failed programming external connectivity on endpoint kib01 (da41fd70e53d746c12e05ec00a4ac7642013c1110bf16dda4Creating hb01  ... done

ERROR: for es01  Cannot start service es01: driver failed programming external connectivity on endpoint es01 (b3542b871c1c3f3e2a4d1058cdc477647ab180e426fa97dd785caf3939998299): exec: "docker-proxy": executable file not found in $PATH

ERROR: for kib01  Cannot start service kib01: driver failed programming external connectivity on endpoint kib01 (da41fd70e53d746c12e05ec00a4ac7642013c1110bf16dda406c2e79c2099983): exec: "docker-proxy": executable file not found in $PATH
ERROR: Encountered errors while bringing up the project.
```

Hmm. HMM. This Stack Overflow post’s comment suggests:

For anyone who stumbles across this and is using more than one service sharing the same port within your docker compose file, then ensure that you’re using the hostname option: https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir

robnordon (Stack Overflow)

The thing is that I’m not sharing ports as it suggests. Perhaps something really is conflicting with those ports… es01 uses 9200, and kib01 uses 5601.

```
% sudo netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      0          31288      890/sshd: /usr/sbin 
tcp        0      0 127.0.0.2:9102          0.0.0.0:*               LISTEN      0          28457      884/bacula-fd       
tcp6       0      0 :::22                   :::*                    LISTEN      0          31290      890/sshd: /usr/sbin 
tcp6       0      0 :::9090                 :::*                    LISTEN      0          25050      1/systemd           
udp        0      0 127.0.0.1:323           0.0.0.0:*                           0          23502      818/chronyd         
udp6       0      0 ::1:323                 :::*                                0          23503      818/chronyd        
```

That’s concerning. I don’t see anything there. Is it internal to the Docker networks? Does Fedora remember them?

```
% ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:2e:f6:5c brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    inet 10.0.0.97/24 brd 10.0.0.255 scope global dynamic noprefixroute ens192
       valid_lft 85159sec preferred_lft 85159sec
    inet6 fe80::4ba1:8784:3859:4143/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:8c:8d:c9:9a brd ff:ff:ff:ff:ff:ff
    inet 172.80.0.1/24 brd 172.80.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8cff:fe8d:c99a/64 scope link 
       valid_lft forever preferred_lft forever
6: br-3cd74143856b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:05:b9:54:0c brd ff:ff:ff:ff:ff:ff
    inet 172.80.1.1/24 brd 172.80.1.255 scope global br-3cd74143856b
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5ff:feb9:540c/64 scope link 
       valid_lft forever preferred_lft forever
8: veth4624a58@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-3cd74143856b state UP group default 
    link/ether ce:b4:89:9d:0e:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ccb4:89ff:fe9d:e84/64 scope link 
       valid_lft forever preferred_lft forever
```

Nope. Then what is it? I have a feeling that docker-proxy might be the culprit as per the above errors. From this page on redhat.com:

I found the root cause. Someone has created the machine with the file /etc/systemd/system/docker.service.d/lvm.conf which changes the option ExecStart.

Jakub Filak

That file doesn’t exist for me, nor did it for the reply poster. I wonder…

```
/usr/bin/which: no docker-proxy in (/home/ [...]
```

Ah. I think that’s supposed to be there.

```
% sudo find / -name docker-proxy
/usr/libexec/docker/docker-proxy
```

Sure enough, that’s not in my $PATH. Is it supposed to be? This page suggests that someone had the same issue and fixed it by removing their Docker install and instead going straight to Docker CE… which I don’t want to do. Sigh.

I have, however, just realised that /usr/libexec/docker was one of the directories listed in find before uninstalling earlier… and I didn’t remove it. Maybe I should do that now. And to repeat the process all over again…

```shell
docker stop $(docker ps -aq)
sudo systemctl disable --now docker
sudo dnf remove -y docker-compose moby-engine docker-*
sudo dnf config-manager --disable docker-*
sudo rm -rf /usr/libexec/docker /var/lib/docker
sudo reboot
[...]

sudo dnf install -y moby-engine docker-compose
sudo systemctl enable docker
sudo reboot
[...]

docker run hello-world
```

sudo dnf install -y moby-engine docker-compose
sudo systemctl enable docker
sudo reboot
[...]

docker run hello-world

And that should work, right?

```
% sudo systemctl status docker
[sudo] password for dylanboyd: 
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Sat 2020-10-17 13:35:30 NZDT; 17s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
    Process: 1017 ExecStart=/usr/bin/dockerd --host=fd:// --exec-opt native.cgroupdriver=systemd $OPTIONS (code=exited, status=1/FAILURE)
   Main PID: 1017 (code=exited, status=1/FAILURE)

Oct 17 13:35:30 mediaworker systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 17 13:35:30 mediaworker systemd[1]: Stopped Docker Application Container Engine.
Oct 17 13:35:30 mediaworker systemd[1]: docker.service: Start request repeated too quickly.
Oct 17 13:35:30 mediaworker systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 17 13:35:30 mediaworker systemd[1]: Failed to start Docker Application Container Engine.
```

Oh for the love of everything sacred… Why do I still work with computers? A quick journalctl showed:

```
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: log-driver: (from flag: journald, from file: json-file)
```

Right. I did do that. That’s why I didn’t get rid of /etc/docker. I specifically set that because I had trouble reading logs in Filebeat. Now it’s just a matter of checking the default log-driver and overriding it. The reason why this tripped me up is that I originally modified the docker.service file itself (knowing full well that something like this would happen), and now that it’s been wiped and reinstalled, I have this issue. However, this brilliant Stack Overflow answer clarifies how to use an override to work around that.

```
% sudo systemctl cat docker.service
[...]

ExecStart=/usr/bin/dockerd \
          --host=fd:// \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS
```

Given that line, one could follow that Stack Overflow answer and create an override using sudo systemctl edit docker. However, I notice that an environment file is defined at /etc/sysconfig/docker which allows setting the --log-driver. That seems much better. Now to remove it from /etc/docker/daemon.json and add the json-file flag in place of the environment file’s journald. We may as well also disable SELinux while we’re in there, since I keep it disabled anyway because of how it conflicts with core services. Let’s try again.
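The environment-file edit amounts to roughly this (a sketch of /etc/sysconfig/docker; the stock contents of that file vary between releases):

```shell
# /etc/sysconfig/docker -- environment file read by docker.service.
# Set the log driver here (json-file instead of journald) so that
# daemon.json no longer needs to define it, and explicitly disable
# SELinux support in the daemon.
OPTIONS='--log-driver=json-file --selinux-enabled=false'
```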

```
% sudo systemctl start docker
% docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```

Yes! Now… the moment of truth.

```
% docker-compose up -d
Creating network "elk_elastic" with driver "bridge"
Creating volume "elk_data01" with local driver
Pulling es01 (docker.elastic.co/elasticsearch/elasticsearch:7.9.2)...
7.9.2: Pulling from elasticsearch/elasticsearch
f1feca467797: Pull complete
[...]

Creating hb01  ... done
Creating es01  ... done
Creating kib01 ... done
```

Sweet baby Moses. I never thought I would see that ever again. Let’s hook up nginx so we can properly test it.

```
% docker-compose up -d
Creating network "nginx_docker_nginx_net" with driver "bridge"
Pulling webserver (nginx:)...
latest: Pulling from library/nginx
bb79b6b2107f: Pull complete
111447d5894d: Pull complete
a95689b8e6cb: Pull complete
1a0022e444c2: Pull complete
32b7488a3833: Pull complete
Digest: sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8
Status: Downloaded newer image for nginx:latest
Creating nginx ... done
```

Kibana works! Oh wow, this is surreal. Now to deploy the remainder of my containers. Kibana’s uptime feature will let me know what I’ve missed.

The only hiccup I’ve faced so far is GitLab saying that I need to create a new account, which is quite concerning given that I have a lot of data on there. I’ll try to diagnose and troubleshoot what the issue could be, but this page indicates that I might have just done something silly while moving directories around after coming from podman.

alertmanager was throwing a bunch of errors along the lines of “explicit ip not provided” in the output of docker logs, so I added this to my config as per this page:

```ruby
alertmanager['flags'] = {
  'cluster.advertise-address' => "127.0.0.1:9093",
}
```

That fixed those errors, but I’m still not able to log in as before. I created a backup of my GitLab config, as I’m going to try to set a new root password and see if that allows me to get anywhere beyond that. If not, I have my old config to fall back on.

After much trying, I’ve decided to start anew. I thankfully have all of my code on my iMac, so I haven’t lost anything. Now it’s just a matter of pushing it back up to my “new” server and getting the pipelines building.

Drawing a Line

The embarrassing part about all of this is that I’m almost exactly where I started before this whole charade began. The problem was initially network communication issues with Bacula, but Docker was thrown under the bus as a result. That was a mistake; we know that now. The next step will be to see if I can get Bacula and Docker working nicely.

Until next time!

This post is licensed under CC BY-SA 4.0 by the author.