Docker build fails incrementally

Hello guys,

I have no clue what's going on.
If I try to build my Dockerfile with docker build, I get the following error:

Sending build context to Docker daemon   2.56kB
Step 1/15 : FROM nextcloud:25.0.3
 ---> 3539f97df93a
Address already in use

If I now restart Docker, I'm able to get a step further:

Sending build context to Docker daemon   2.56kB
Step 1/15 : FROM nextcloud:25.0.3
 ---> 3539f97df93a
Step 2/15 : RUN apt-get update
 ---> Using cache
 ---> c9b9b1a37590
Address already in use

I already stopped all containers, but that's not helping.
Any ideas?

greetings

The “Address already in use” error message suggests that the issue you’re encountering is related to network ports. When you run a Docker container, it can map specific network ports on the host machine to ports within the container. If a port is already in use on the host, the container cannot bind to it and you’ll receive the “Address already in use” error.

You can first try cleaning up stopped containers, unused networks, and dangling images by running the following command:

docker system prune

Check which port is in use on the host machine by running the following command:

lsof -i -P -n | grep LISTEN

If there’s a conflict between the ports exposed by the Nextcloud image and the ports in use on the host, you can either free up the host ports that are in use or change the port mapping so the container publishes on different host ports. You can do this with the -p option of docker run.
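
For example, to publish the container on a different host port (8080 here is just an arbitrary free host port, and 80 is assumed to be the port the Nextcloud image listens on):

docker run -d -p 8080:80 nextcloud:25.0.3

The left-hand number is the host port and can be anything that is still free; the right-hand number is the port inside the container.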

Hej,
Thanks for the reply. There is no exposed port in the Dockerfile, so I don’t know which port could be conflicting. Furthermore, I’m not running the container; it’s only a build.

Any suggestions on that?

Using

lsof -i -P -n | grep LISTEN

Shows nothing. But I’m still facing the issues described above. Any help is appreciated.

By any chance, are you using --network=host in your build command, and does the offending RUN instruction actually start a process that tries to listen on a port that is already bound?

Please share the content of your Dockerfile and provide the exact command you are using to execute the build(s), to enable us to reproduce the issue.

Another idea that came to mind:
By default, build containers (each layer uses a new one) are attached to the default bridge network. The default bridge network uses the subnet CIDR 172.17.0.0/16, which allows 65,534 IPs. Is it possible you are experiencing a collision between a LAN network and the container network?
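
If such a collision turned out to be the cause, the subnet of the default bridge can be changed via the bip option in /etc/docker/daemon.json. A minimal sketch, where 172.26.0.1/16 is only an example range assumed to be unused in your environment (the Docker daemon needs a restart afterwards):

{
"bip": "172.26.0.1/16"
}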

Hej,

Thanks for your time. I don‘t think I’m using host mode. Here‘s my Dockerfile:

 FROM nextcloud:25.0.3
 RUN apt-get update
 RUN apt-get install -y ghostscript
 #solve issues with imagemagick
 RUN apt-get install -y libmagickcore-6.q16-6-extra
 
 RUN apt-get install -y software-properties-common
 #RUN add-apt-repository universe && apt upgrade
 RUN apt-get install -y ffmpeg
 
 RUN apt-get -y install ocrmypdf 
 RUN apt-get -y install tesseract-ocr 
 RUN apt-get -y install tesseract-ocr-deu

I run it via

docker build -t "nextcloud:local" .

I have already tried stopping all containers and removing all networks, but it still conflicts. There are no local networks in this IP range.

Best regards

Your Dockerfile just installs a bunch of packages, but doesn’t install anything that might start a service. On second thought, your issue does not look like an already bound port; it looks like the same IP address being handed out twice. For instance, if you are using Docker Desktop for Windows, it could well happen that the WSL network collides with the Docker network.

You might also want to take a look at Best practices for writing Dockerfiles | Docker Documentation regarding how to optimize apt-get usage while creating images.
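
For instance, a sketch of how the RUN instructions from your Dockerfile could be collapsed into a single layer following those best practices (same packages, nothing else changed):

 FROM nextcloud:25.0.3
 # single RUN layer: update and install together, then remove the apt lists to keep the image smaller
 RUN apt-get update && apt-get install -y \
       ghostscript \
       libmagickcore-6.q16-6-extra \
       software-properties-common \
       ffmpeg \
       ocrmypdf \
       tesseract-ocr \
       tesseract-ocr-deu \
     && rm -rf /var/lib/apt/lists/*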

Can you check what happens if you use docker build --network=host -t nextcloud:local .

Hej,

I’m using an Ubuntu server version.

Interesting, it is working with --network=host. Thanks a lot!

I‘ll have a look at your link!

Do you know how to find the root of my problem?

This was just a test; it does not solve the general problem. At least you have a workaround now, though.

In order to get a better understanding, I need you to share the output of the following commands:

  • cat /etc/docker/daemon.json (if it exists)
  • docker info
  • docker network inspect bridge
  • ip address show scope global
  • ip route
  • snap list docker
  • dpkg -l | grep docker

Hej,

Thanks a lot, I gathered the information.

cat /etc/docker/daemon.json

{
"ipv6": true,
"fixed-cidr-v6": "fe80::/64"
}

docker info :

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.16.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 21
  Running: 10
  Paused: 0
  Stopped: 11
 Images: 86
 Server Version: 23.0.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 31aa4358a36870b21a992d3ad2bef29e1d693bec
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-60-generic
 Operating System: Ubuntu 22.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.662GiB
 Name: nucy
 ID: XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

docker network inspect bridge:

[
    {
        "Name": "bridge",
        "Id": "xxxxxxxxx",
        "Created": "2023-02-11T20:16:33.29201866+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                },
                {
                    "Subnet": "fe80::/64",
                    "Gateway": "fe80::42:c6ff:fe91:46f9"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

ip address show scope global


2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether c0:3f:d5:6d:17:40 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
    inet 192.168.177.2/24 metric 100 brd 192.168.177.255 scope global dynamic eno1
       valid_lft 821320sec preferred_lft 821320sec
    inet6 fd00::xxxx:xxxx:fe6d:1740/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 7046sec preferred_lft 3446sec
    inet6 2a02:[deleted]:xxxx:xxxx:fe6d:1740/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 7046sec preferred_lft 3446sec
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:6d:83:eb:f7:35 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:c6:91:46:f9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
9: br-9cd504b14774: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:19:2b:53:ec brd ff:ff:ff:ff:ff:ff
    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-9cd504b14774
       valid_lft forever preferred_lft forever
15: br-90a1c2e867eb: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a2:86:b3:6b brd ff:ff:ff:ff:ff:ff
    inet 192.168.48.1/20 brd 192.168.63.255 scope global br-90a1c2e867eb
       valid_lft forever preferred_lft forever
551: br-c11a26d5c701: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a0:ae:82:ab brd ff:ff:ff:ff:ff:ff
    inet 172.41.0.1/26 brd 172.41.0.63 scope global br-c11a26d5c701
       valid_lft forever preferred_lft forever
556: br-b2c4f423acf6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:7b:50:5e:2b brd ff:ff:ff:ff:ff:ff
    inet 172.18.1.0/16 brd 172.18.255.255 scope global br-b2c4f423acf6
       valid_lft forever preferred_lft forever
    inet6 fd00:1::1/64 scope global 
       valid_lft forever preferred_lft forever
557: br-003cf2c86565: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:6c:ea:a9:b7 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-003cf2c86565
       valid_lft forever preferred_lft forever
558: br-dd935428f392: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:21:ca:d2:83 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-dd935428f392
       valid_lft forever preferred_lft forever
559: br-08c779479414: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f3:25:f5:98 brd ff:ff:ff:ff:ff:ff
    inet 172.40.1.0/16 brd 172.40.255.255 scope global br-08c779479414
       valid_lft forever preferred_lft forever
610: br-5c6b3a9a3072: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:4c:72:3a:bd brd ff:ff:ff:ff:ff:ff
    inet 172.30.9.1/29 brd 172.30.9.7 scope global br-5c6b3a9a3072
       valid_lft forever preferred_lft forever

ip route

default via 192.168.177.1 dev eno1 proto dhcp src 192.168.177.2 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-b2c4f423acf6 proto kernel scope link src 172.18.1.0 
172.19.0.0/16 dev br-003cf2c86565 proto kernel scope link src 172.19.0.1 
172.20.0.0/16 dev br-dd935428f392 proto kernel scope link src 172.20.0.1 
172.25.0.0/16 dev br-9cd504b14774 proto kernel scope link src 172.25.0.1 linkdown 
172.30.9.0/29 dev br-5c6b3a9a3072 proto kernel scope link src 172.30.9.1 
172.40.0.0/16 dev br-08c779479414 proto kernel scope link src 172.40.1.0 
172.41.0.0/26 dev br-c11a26d5c701 proto kernel scope link src 172.41.0.1 
192.168.48.0/20 dev br-90a1c2e867eb proto kernel scope link src 192.168.48.1 linkdown 
192.168.177.0/24 dev eno1 proto kernel scope link src 192.168.177.2 metric 100 
192.168.177.1 dev eno1 proto dhcp scope link src 192.168.177.2 metric 100 
192.168.177.2 dev eno1 proto dhcp scope host src 192.168.177.2 metric 100 

snap list docker

error: no matching snaps installed

dpkg -l | grep docker

ii  docker-buildx-plugin                  0.10.2-1~ubuntu.22.04~jammy             amd64        Docker Buildx cli plugin.
ii  docker-ce                             5:23.0.1-1~ubuntu.22.04~jammy           amd64        Docker: the open-source application container engine
ii  docker-ce-cli                         5:23.0.1-1~ubuntu.22.04~jammy           amd64        Docker CLI: the open-source application container engine
ii  docker-ce-rootless-extras             5:23.0.1-1~ubuntu.22.04~jammy           amd64        Rootless support for Docker.
ii  docker-compose-plugin                 2.16.0-1~ubuntu.22.04~jammy             amd64        Docker Compose (V2) plugin for the Docker CLI.
ii  docker-scan-plugin                    0.23.0~ubuntu-jammy                     amd64        Docker scan cli plugin.
ii  python3-docker                        5.0.3-1                                 all          Python 3 wrapper to access docker.io's control socket

I guess your fixed-cidr-v6 is the problem: it should use a ULA range, not a link-local address range.

For the sake of testing, I suggest deactivating IPv6 and checking whether that changes anything. If it does, reconfigure the fixed-cidr-v6 range to use a ULA (a /64 subnet with an fd00 prefix).
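
A minimal sketch of what /etc/docker/daemon.json could look like with a ULA range (fd00:d0c0:1::/64 is just an example subnet, pick any /64 with an fd00 prefix; restart the Docker daemon afterwards, e.g. systemctl restart docker):

{
"ipv6": true,
"fixed-cidr-v6": "fd00:d0c0:1::/64"
}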

I have never tried to use IPv6 with Docker myself, so I won’t be able to answer follow-up questions regarding IPv6.

Note: do not use the 2001:db8::/64 range, as some people do because they see it in the docs or in a blog post. The 2001:db8 prefix is reserved for documentation to illustrate IPv6 usage and is not meant to be used in real networks.

Note 2: If I am not mistaken, the ULA needs to be a different one than the one used in your network. You would need to add a route in your network router to the ULA you defined as the fixed-cidr-v6 range, via the Docker host’s ULA fd00::xxxx:xxxx:fe6d:1740. I am unsure whether you need to enable IPv6 routing or whether it works out of the box. Please report back whether it was necessary :slight_smile:
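
For illustration, on a Linux-based router such a static route would look roughly like this (the /64 is whatever you configure as fixed-cidr-v6, here the example subnet from above, and the via address is your Docker host’s redacted ULA):

ip -6 route add fd00:d0c0:1::/64 via fd00::xxxx:xxxx:fe6d:1740

On most consumer routers the equivalent is a static IPv6 route entry in the web interface.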

Ah, I see. I had used that range since I did not want to use the 2001 range. Thanks for your advice. I’m now using fd00:1::/64 as the subnet, plus an ip route in my router. I think you can use the same subnet (fd00::/64), but I’m not sure at the moment.
Right now it seems that you’re right concerning my issue. A thousand thanks!

I haven’t found a nice guide that explains IPv6 + Docker well… To me it seems that most people out there are still confused by IPv6 :smiley:

Edit: The subnet fd00::/64 with no ip route also works.

Thank you for keeping us updated :slight_smile:

Did you have to remove the default bridge network? I am just asking because Docker networks are generally immutable. I am unclear whether it got replaced by the changed fixed-cidr-v6 (and a Docker daemon restart).

I am actually surprised the ULA subnet can be the same, but even better if it works, since then there is no need to tinker with routes.

Since I’m using docker-compose, the containers aren’t attached to the default bridge. Instead, docker-compose creates its own networks for the stacks being deployed.

Thanks a lot for your help!