Can't change container

Hello.
I’m a newbie and need some help. Please help me understand what I’m doing wrong.
Debian 10, docker 20.10.8, build 3967b7d

  1. I have an image, locally
# docker images
test-image
  2. Start a container from the image
# docker run -d --name test --restart always test-image
13abc3dab1382192bcd13
  3. Enter the container, install nano, and change the configuration file
# docker exec -it test bash
apt-get install nano
...
# nano default.conf
> 111
  4. Exit the container
    # exit

  5. Commit the changes to create a new image
    # docker commit 13abc3dab1382192bcd13 newtest-image

  6. Check

# docker images
newtest-image
test-image
  7. Stop and remove the current container, then start a new one from the new image
# docker stop test
# docker rm 13abc3dab1382192bcd13
# docker run -d --name newtest --restart always newtest-image
  8. Enter the newtest container and look for the changes
# docker exec -it newtest bash
# nano
nano is present
# cat default.conf
111
  9. Double-check
# docker cp newtest:/etc/default.conf ./
# cat default.conf
111

All changes were successfully committed to the new image,
BUT
the daemon that starts in/with the container (nginx, postgres, mysql, … etc.) doesn’t see the changes in the config file. It starts as if default.conf were empty.

I think this could be because the daemon reads its config file from another location. But I’m afraid there could be another explanation: layers. Maybe everything I did ended up in a layer other than the daemon’s layer, so the daemon can’t see the changes in the newly committed container.
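For example, is something like this the right way to check which config the daemon really loads and what command the container actually starts? (nginx here is just an example, since it is one of the daemons I tried)

    # docker exec newtest nginx -T
    # docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' newtest-image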

Thank you in advance for any help.

You might want to acquaint yourself with how to write a Dockerfile and build your image based on it. Otherwise, the image you build won’t be reproducible.
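For example, a minimal Dockerfile covering the manual steps from your question might look roughly like this (the base image name and the config path are taken from your question; adjust them to your actual setup):

    FROM test-image
    # install tools at build time instead of inside a running container
    RUN apt-get update && apt-get install -y nano
    # bake the edited configuration into the image
    COPY default.conf /etc/default.conf

Then the image can be rebuilt reproducibly with something like:

    # docker build -t newtest-image .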

If a file that exists in a previous layer is overwritten in a new layer, the complete new file gets added to the new layer and marked as an update. Only the updated file will be visible/available in the container filesystem.

Furthermore, if a file that exists in a previous layer is deleted in a new layer, it doesn’t get deleted from the previous layer; it just gets marked as deleted and won’t be visible/available in the container filesystem.
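As a rough illustration (the file path and contents are made up), each instruction below creates a layer, and only the newest version of the file is visible in the resulting container filesystem:

    FROM debian:10
    # layer 1: creates the file
    RUN echo "original" > /etc/default.conf
    # layer 2: overwrites it - the old copy stays in layer 1 but is hidden
    RUN echo "111" > /etc/default.conf
    # layer 3: removes it - a whiteout marker hides the file, the earlier layers stay untouched
    RUN rm /etc/default.conf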

A daemon started in a container will stop with the container; if you commit such a container as an image and create a new container from it, the daemon will still be stopped. If a container should start a daemon or another process, you need to write an entrypoint script that handles everything required to generate configurations based on environment variables and then finally starts the main process, which MUST be a foreground process (otherwise the container is not kept alive). Just to be sure: you are aware that running different services in a single container is an anti-pattern, right?
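A minimal sketch of such an entrypoint script, assuming nginx as the main process (swap in your own config generation and your own daemon):

    #!/bin/sh
    # entrypoint.sh - sketch only
    set -e

    # generate configuration from environment variables here, e.g. by filling
    # a template with envsubst, before the main process starts

    # exec replaces the shell, so the daemon becomes PID 1, runs in the
    # foreground and keeps the container alive
    exec nginx -g 'daemon off;'

And in the Dockerfile:

    COPY entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]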

Thanks a lot for your help!)) This is what I was looking for: “entrypoint.sh”. Now I can control the whole process. I didn’t know about this file; I only started learning docker 5 days ago )

If you are interested in learning more, I can highly suggest this free self-paced training: Introduction to Containers. It will provide you with a good understanding of docker concepts and knowledge about how things are done in docker.

Don’t be intimidated by the number of slides, as most of them can be understood rather quickly. Just make sure to perform the hands-on exercises.

Cool website. Thank you one more time ) I will read through it. Kubernetes is there too. Just not sure I’ll have enough time to learn all of this ))