Port mappings are not released

Confirmed! My test steps all appear to be working now, even for privileged ports.

I had another application listening on the same port I was trying to map to the container. Stopping that application solved the problem.
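For anyone else hitting this, a quick way to confirm that kind of conflict before blaming Docker (the port number and PID below are placeholders) is:

# macOS / Linux: see which process already listens on the host port
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
# Windows: find the owning PID, then look up the process name
netstat -ano | findstr :80
tasklist /FI "PID eq 1234"    # replace 1234 with the PID netstat reported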

Probably fixed as of beta 11, as mentioned above. It works just fine for me in beta 12 (I missed out on beta 11).

I am still having this issue. I am using Docker for Windows Beta 16 installed yesterday. Need some help!!!

PS C:\Users\monu\Documents\Visual Studio 2015\Projects\WebApplication3\src\WebApplication3> .\DockerTask.ps1 -Run -Environment Release
VERBOSE: Setting: $env:CLRDBG_VERSION = "VS2015U2"
VERBOSE: Setting: $env:REMOTE_DEBUGGING = 0
VERBOSE: Executing: docker-compose -f 'C:\Users\monu\Documents\Visual Studio 2015\Projects\WebApplication3\src\WebApplication3\bin\Docker\Release\app\docker-compose.yml' -p webapplication3 up -d
Starting webapplication3_webapplication3_1

ERROR: for webapplication3  driver failed programming external connectivity on endpoint webapplication3_webapplication3_1 (a7af435eb06b1435969078dac7c5c992169a60e3d5b39be0ee254fe3b53db3d): Bind for 0.0.0.0:80 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
Run : Failed to start the container(s)
At C:\Users\monu\Documents\Visual Studio 2015\Projects\WebApplication3\src\WebApplication3\DockerTask.ps1:491 char:5
+     Run
+     ~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Run

Works in beta 12 if you do not map the container to a port that is already in use. My port 80 is being used by Apache, so if I map it to 8888 instead, it works: docker run -d -p 8888:80 --name test_web_server nginx
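To spell that out, the sequence I'd suggest (the lsof check is just to confirm what owns port 80) is:

# confirm Apache (or whatever else) is the process holding port 80
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
# publish the container on a free host port instead
docker run -d -p 8888:80 --name test_web_server nginx
# quick check that the remapped port answers
curl -I http://localhost:8888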


I am still having this error on Docker for Mac, and I'm on version 1.12. Did anyone find any solutions?

I am also on 1.12, and this is actually the first time I have gotten the error. It didn't happen to me before. I even tried restarting my Mac; it didn't help!

Started happening for me today, right after I upgraded to Version 1.12.0-rc2-beta17 (build: 9779). It never happened to me before. I tried restarting the Docker service and then restarting my Mac (OS X 10.11.5), but that didn't help. com.docker.slirp starts listening on some ports as soon as I start the Docker service, even when no containers are up. This only happens for some port mappings, not all.

Update on this: Resetting to factory settings seems to have fixed it. :slight_smile:

Mind you, a factory reset deletes all your images too.

I did a factory reset and am still having the issue on 1.12.0-rc3

Edit: Homebrew was running a service that was using the same port. Running brew services stop on that service allowed Docker to work fine.
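In case someone else has the same Homebrew conflict, the rough sequence (the service name below is just a placeholder) is:

# find what owns the conflicting port, e.g. 8080
sudo lsof -nP -iTCP:8080 -sTCP:LISTEN
# list Homebrew-managed services and stop the offender
brew services list
brew services stop jenkins    # "jenkins" is a placeholder service name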

Still experiencing this on Version 1.12.0-rc4-beta19

Same for me: Version 1.12.0-rc4-beta19 (build: 10258)

I had port-mapping issues with docker-compose using WildFly and MongoDB images.
Even after a docker rm $(docker ps -a -q) and a restart of the Docker service, it still reopens listeners on ports 8080 and 27017.

After checking the open files with lsof -nP | grep LISTEN:

com.docke 1646 nico   23u    IPv4 0x8e51035cd2420c0f         0t0       TCP *:8080 (LISTEN)
com.docke 1646 nico   24u    IPv6 0x8e51035cc7751da7         0t0       TCP [::1]:8080 (LISTEN)
com.docke 1646 nico   25u    IPv4 0x8e51035ccba8b127         0t0       TCP *:27017 (LISTEN)
com.docke 1646 nico   26u    IPv6 0x8e51035cc7754e07         0t0       TCP [::1]:27017 (LISTEN)

A sudo kill -9 1646 restarts Docker but still doesn't release the ports…

Just seen with 1.12.0-rc4-build20 (build: 10404) with the Postgres container and port 5432.

A few rounds of deleting all containers and restarting Docker via the OS X status bar worked.

Still facing this issue in Version 1.12.0-beta21 (build: 10868). The whole team switched back to Toolbox, which is really sad… As mentioned above, resetting to factory defaults "fixes" it, but it's not an acceptable solution since you lose all the images.

Does anyone have some repro steps for this problem on beta 21? I've not been able to reproduce it myself for the last few betas. I've run commands like docker run -p 80:80 nginx and docker run -p 5432:5432 postgres and the port mappings are always released when I hit Control+C. For some reason none of our automated tests can reproduce it either :disappointed: If I can discover some good repro steps then I can add the scenario to our test suite.

Is it still the case that restarting Docker.app will not fix the problem, but resetting to factory defaults will? The only significant difference I can think of between these two operations (apart from losing images) is that restarting Docker.app will restart containers that have settings like --restart always, while resetting to factory defaults deletes these containers altogether. Could the problem be caused by auto-restarting containers?
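If anyone in the repro state wants to test that theory, something like this (a sketch; adjust to taste) should show whether any container carries an auto-restart policy:

# print each container's restart policy; "always" or "unless-stopped"
# means it will come back when Docker.app restarts
docker ps -aq | while read id; do
  docker inspect -f '{{.Name}} -> {{.HostConfig.RestartPolicy.Name}}' "$id"
done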

The way it's supposed to work is:

  • request to expose a port arrives in the docker daemon
  • docker spawns a process in the VM with a command-line like /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.2 -container-port 80
  • the docker-proxy binds the port inside the VM and requests the host bind the port by creating a directory:
moby:~# ls /port
README                            tcp:0.0.0.0:80:tcp:172.17.0.2:80
  • when the port is to be unmapped, the docker-proxy process exits
  • the docker-proxy closes a file in /port, causing the mapping to be released.

In the state where it repros, I'd be interested in knowing whether the container is definitely stopped (from docker ps), whether the docker-proxy in the VM is still running or not, and whether ls /port still shows the mapping. (BTW, be careful with the /port filesystem: don't do anything other than ls /port or it might fail.)
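Put together, a rough diagnostic pass in the broken state (assuming you can already get a shell inside the Moby VM) would be:

# on the host: is the container really gone?
docker ps -a
# inside the VM: is a docker-proxy process still holding the mapping?
ps | grep docker-proxy
# inside the VM: is the mapping still present in the /port filesystem?
ls /port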

I'm running 1.12.0 stable and not seeing this problem any more (although I have only stopped/removed/created my containers a couple of times). But I have noticed that all the files owned by "docker" in /Database and /port have odd timestamps. In /Database, all files have dates of Jan 1 1970 (timestamps of 0), while the /port files have dates of May 4 2006.

Maybe this doesn't matter, but maybe it is symptomatic of some sort of bug? What happens if the docker-proxy process coredumps? Would the port mapping be released? Or what if the docker-proxy goes into some sort of loop due to memory corruption? When I have had this problem on past betas, only a handful of published ports were stranded (I have 89 ports mapped for around 20 containers) and it was random which ports weren't removed from the /port filesystem. It was a different set each time I encountered this bug.

Probably not related, but I just thought I'd mention the odd timestamps (especially for the /port filesystem).

/ # ls -l /port
total 0
-r--r--r--    1 docker   docker           0 May  4  2006 README
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:1961:tcp:172.18.0.16:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:2587:tcp:172.18.0.25:587
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32780:tcp:172.18.0.4:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32781:tcp:172.18.0.5:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32782:tcp:172.18.0.6:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32783:tcp:172.18.0.7:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32784:tcp:172.18.0.8:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32785:tcp:172.18.0.9:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32786:tcp:172.18.0.10:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32787:tcp:172.18.0.11:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32788:tcp:172.18.0.12:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32789:tcp:172.18.0.13:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32790:tcp:172.18.0.14:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:32791:tcp:172.18.0.15:1961
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:3587:tcp:172.18.0.26:587
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:443:tcp:172.18.0.16:443
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:465:tcp:172.18.0.24:465
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:587:tcp:172.18.0.24:587
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:80:tcp:172.18.0.16:80
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9930:tcp:172.18.0.24:9930
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9931:tcp:172.18.0.24:9931
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9932:tcp:172.18.0.24:9932
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9933:tcp:172.18.0.24:9933
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9934:tcp:172.18.0.24:9934
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9935:tcp:172.18.0.24:9935
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9936:tcp:172.18.0.24:9936
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9937:tcp:172.18.0.24:9937
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9938:tcp:172.18.0.24:9938
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9939:tcp:172.18.0.24:9939
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9940:tcp:172.18.0.24:9940
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9941:tcp:172.18.0.24:9941
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9942:tcp:172.18.0.24:9942
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9943:tcp:172.18.0.24:9943
dr--r--r--    1 docker   docker           0 May  4  2006 tcp:0.0.0.0:9944:tcp:172.18.0.24:9944
... ports 9945-9999 omitted ...

The way I reproduce it is always the same, but the Docker agent does not always fail to release the ports.
It can happen quickly or after a few hours.

Generally, it happens when I start my docker-compose services (I have a lot of services to start, something like 10 containers) and then stop them abruptly with a command like:

docker stop $(docker ps -a -q); docker rm $(docker ps -a -q); docker volume rm $(docker volume ls -qf dangling=true)

Doing this tends to trigger the bug quite quickly.
Hope this helps.
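If the abrupt teardown really is the trigger, a gentler variant worth trying (just a sketch of the same cleanup, letting Compose stop its own services first) might be:

# let Compose stop and remove its services first
docker-compose down
# then clean up anything that is left over
docker rm $(docker ps -a -q)
docker volume rm $(docker volume ls -qf dangling=true)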


Have you guys tried this solution? (Basically, reset to factory defaults)

I've had this happen on the latest Docker 1.12.1-rc1-beta23. Destroying all the containers and images did not help; I had to reset Docker to factory settings.
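For reference, "destroying all the containers and images" here means something along the lines of:

docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -q)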

Is there a way to manually unmap the ports so we don't have to reset Docker when this occurs?