I recently migrated to a CI setup that uses docker-machine to start Google Compute Engine instances, and since then I have had issues with inter-container networking. Prior to this setup I had a single dedicated instance where all the containers ran.
docker-machine: 0.12.2
docker client: 17.09.0-ce

docker-machine started instances:
CoreOS: 1520.6.0
docker: 1.12.6
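For context, the instances are created roughly like this with docker-machine's google driver (the project, zone, and image URL here are placeholders, not my real values):

```
docker-machine create --driver google \
  --google-project my-gcp-project \
  --google-zone europe-west1-b \
  --google-machine-image \
    "https://www.googleapis.com/compute/v1/projects/coreos-cloud/global/images/family/coreos-stable" \
  ci-build-host

# point the local docker client at the new instance
eval "$(docker-machine env ci-build-host)"
```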
My CI job has the following steps:
1. Start a container to run the build job.
2. Start a db container.
3. Start a db config container.
4. Later, start the built project container to test it.
The build-job container (1) uses a bind-mounted docker socket to start the other containers on the host (maybe it should be using the API to have docker-machine start them instead?). When the db container (2) starts, the config container (3) connects to it using the container name. When the project container (4) starts, it connects using the IP address obtained from docker inspect (because of a Java issue with no-dot hostnames, the container name cannot be used).
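Concretely, the flow looks roughly like this. This is only a sketch: the image names, the ci-net network, and the configure-db command are placeholders, and it assumes a user-defined bridge network so the embedded DNS resolves container names:

```
# (1) the build-job container gets the host's docker socket, so the
# containers it starts become siblings on the same CoreOS host
docker run -d --name build-job \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-ci-image

# from inside (1): create a network, start the db (2), and let the
# config container (3) reach it by container name
docker network create ci-net
docker run -d --name db --net ci-net my-db-image
docker run --rm --net ci-net my-config-image configure-db --host db

# (4) connects via the IP from docker inspect rather than the
# no-dot hostname "db", which Java rejects
DB_IP=$(docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db)
docker run --rm --net ci-net -e DB_HOST="$DB_IP" my-project-image
```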
Most of the ports are mapped to an external host port to aid with debugging. In this environment, when a port is mapped, containers are unable to connect directly to each other on that port. Without the port mapping, communication works fine.
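A minimal sketch of the symptom, reusing the placeholder names from above and assuming nc is available in the config image:

```
# with a host port mapping, container-to-container traffic fails here
docker run -d --name db --net ci-net -p 5432:5432 my-db-image
docker run --rm --net ci-net my-config-image nc -zv db 5432   # times out

# remove the mapping and the same connection succeeds
docker rm -f db
docker run -d --name db --net ci-net my-db-image
docker run --rm --net ci-net my-config-image nc -zv db 5432   # ok
```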
This behaviour is unique to this environment; in every other environment it is possible to connect via either the mapped host port or the internal container port. Comparing docker inspect output across environments revealed only a different file-system driver.
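For instance (the driver names here are illustrative, not my exact output):

```
docker inspect -f '{{.GraphDriver.Name}}' db   # e.g. overlay on CoreOS vs aufs elsewhere
```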
I also have a Stack Overflow post on the issue:
https://stackoverflow.com/questions/46739012/coreos-docker-port-binding-behaviour