Docker Community Forums

Share and learn in the Docker community.

How to handle containers changing IP addresses?

I have Docker running on a Debian machine. Inside Docker I have an nginx container and a MySQL container. In the nginx container I am running 5 websites: 3 WordPress and 2 OpenCart.

During the WordPress and OpenCart installation wizards, I had to define the IP address of the MySQL database, so I used 172.17.0.2 and everything was fine.

My problem is that after restarting the Debian server, the IP address of the MySQL container changed to 172.17.0.3. So of course all the websites stopped working.

So I had to go through all the websites' configuration files and change the defined MySQL IP address to fix the issue.

Inevitably at some point this will happen again. So I am wondering how to handle this issue.

Should I set static IP addresses for the containers?
Should I set up a name resolution system?

What’s a good way to handle this issue?

Thanks

Using a container IP, instead of its container/service name or network alias, is asking for trouble.
While the names/aliases remain stable, the IP might change when a container restarts.
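For example (a sketch only; the network name `web` and the image choices are illustrative, not taken from your setup), name resolution works once the containers share a user-defined network:

```shell
# Create a user-defined bridge network (the name "web" is an example)
docker network create web

# Start the database and the web server attached to that network
docker run -d --name mariadb --network web \
 -e MYSQL_ROOT_PASSWORD=example \
 linuxserver/mariadb
docker run -d --name nginx --network web nginx

# Inside the nginx container, the hostname "mariadb" now resolves
# via the embedded DNS server of the user-defined network
docker exec nginx getent hosts mariadb
```

With this in place you would configure WordPress/OpenCart to use the hostname `mariadb` instead of an IP, and a restart no longer breaks anything.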

But the container name doesnt resolve to an ip. If it did then i would use a name. For example if i ping mysql from nginx it doesnt resolve.

Since I know that it works if done right, I suggest you share the exact commands or docker-compose.yml that lead to your problem.

This is how I install the whole thing:

# update
apt-get update ; apt-get -y upgrade ; apt-get -y dist-upgrade

# prerequisites
apt-get install -y curl

# install docker
curl -fsSL get.docker.com -o get-docker.sh ; sh get-docker.sh

# add docker user
adduser --disabled-password --gecos '' containers ; echo containers:containers | chpasswd

# add user to group
usermod -aG docker containers

# install portainer
docker run -d \
 --restart always \
 --name portainer \
 -h portainer \
 -p 9000:9000 \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v portainer:/data \
 portainer/portainer

# install letsencrypt and nginx
docker run -d \
 --restart always \
 --name=letsencrypt \
 -h letsencrypt \
 -p 80:80 \
 -p 443:443 \
 --cap-add=NET_ADMIN \
 -e PUID=1001 \
 -e PGID=1001 \
 -v /etc/localtime:/etc/localtime:ro \
 -e URL=domain1.net \
 -e SUBDOMAINS=www, \
 -e VALIDATION=http \
 -e EMAIL=email@here.net \
 -e EXTRA_DOMAINS=domain2.com,domain3.com,domain4.com,domain5.com \
 -v letsencrypt:/config \
 linuxserver/letsencrypt

# install mariadb
docker run -d \
 --restart always \
 --name mariadb \
 -h mariadb \
 -p 3306:3306 \
 -e PUID=1001 \
 -e PGID=1001 \
 -v /etc/localtime:/etc/localtime:ro \
 -v mariadb:/config \
 -e MYSQL_ROOT_PASSWORD=passhere \
 linuxserver/mariadb

Sorry for being unclear about what I expected: the docker run commands for letsencrypt and mariadb would have been sufficient. Seeing those commands helps a lot in spotting what actually is missing.

Normally, I would say use mariadb as the hostname when configuring the database. If this is not sufficient, you might want to try adding --link mariadb:db on the letsencrypt container or --alias db on the mariadb container and retry, this time using the alias db to access the database.

See: https://docs.docker.com/engine/reference/commandline/network_connect/#use-the-legacy---link-option

Also: docker-compose makes life way easier.

So I tried using just mariadb in the database configuration of OpenCart and WordPress, but they simply can’t connect to the database. No name resolution there. I’ve also recreated the mariadb container with --alias db added to the docker run command, but I got a message saying unknown flag: --alias. I’ve also read that --link is a legacy option, and I wouldn’t want to be moving backwards.

So I’m starting to think that I’m doing something fundamentally wrong here.

I did some reading around docker-compose, and I’m trying to figure out if this is the way forward, and if I should completely drop the docker run commands, and convert all my scripts to use docker-compose.

You suggested that docker-compose will make my life easier. Could you elaborate how? How would docker-compose help my setup and the situation I’m currently in?

Thanks

Docker Compose provides a declarative approach to configuring a single- or multi-container stack.
Using custom Docker networks in compose files comes naturally and is easy to do. Each custom network has its own embedded DNS server, which allows all containers in such a network to reach each other by name.

If you change something in the compose file, the next docker-compose up -d re-creates the affected containers with the new configuration. With docker run, you would need to delete the container and recreate it by hand.

One of the most important parts: you can treat a compose.yml the same as code and check it into an SCM. Orchestration of those containers boils down to a single docker-compose up -d or docker-compose down - this is clearly less error-prone, don’t you think?
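As a sketch of what that could look like for the stack in this thread (the images, ports, and volumes are taken from your docker run commands; the alias db and the rest are assumptions to illustrate the idea, not a tested drop-in):

```yaml
version: "3"

services:
  letsencrypt:
    image: linuxserver/letsencrypt
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/config
    # nginx config inside this container can now reach the
    # database as "mariadb" (or via the alias "db")

  mariadb:
    image: linuxserver/mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=passhere
    volumes:
      - mariadb:/config
    networks:
      default:
        aliases:
          - db

volumes:
  letsencrypt:
  mariadb:
```

Compose creates a user-defined default network for the project automatically, so both services get name resolution without any extra configuration.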

From your reply, I understand the benefit of using docker-compose from an administrative perspective. It makes administration of multiple containers easier.

But does it help with name resolution?

So creating a custom network using docker-compose will also enable an embedded DNS server, whereas with plain docker run commands it won’t?

I have no problem converting everything to docker-compose, but right now I need to handle this changing of IP addresses of the containers.

If you were to set up nginx and mysql, how would you handle the issue I’m facing right now? What would your compose.yml look like?

https://composerize.com/ will help me with the conversion but first I’d like to understand the benefit of docker-compose to my current situation.

The default bridge network actually does not support name resolution, but a user-defined network does.

Containers connected to the default bridge network can communicate with each other by IP address. Docker does not support automatic service discovery on the default bridge network. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. You can link two containers together using the legacy docker run --link option, but this is not recommended in most cases.

I don’t know why the default network would not support name resolution, but I’m sure there are reasons. Now that I understand why it’s not working, I know what to do.

Thanks


Oh, --alias only works for custom networks - I didn’t know that.

I stopped using docker run to start containers way before --link was flagged as deprecated. It worked, so I expected its replacement --alias would do the same. Obviously its implementation is different.

At least with docker-compose/Swarm a private network is always created, so name resolution was never an issue there.
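For completeness: --alias is a flag of docker network connect, not of docker run, which would explain the unknown flag error above. A sketch using the container names from this thread (the network name backend is an assumption):

```shell
# Create a user-defined network and attach the existing containers;
# --alias is accepted here, unlike with `docker run`
docker network create backend
docker network connect --alias db backend mariadb
docker network connect backend letsencrypt

# From the letsencrypt container, both names should now resolve
docker exec letsencrypt getent hosts mariadb
docker exec letsencrypt getent hosts db
```

Note that docker run does have an equivalent, --network-alias, but it only takes effect when the container is started with --network on a user-defined network.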

I fixed my problem by creating a new network with my custom IP range and then just using the container name as the hostname.
I applied this to my Node-RED and Mosquitto containers by going into each one in Portainer and clicking “join network”.