Create and update a mysql docker image with "up to date" data

Hello everybody,

What I want to do is similar to what happens with a Maven project and a central repository: a dev modifies the project, and if the changes are OK, the jar artifact is published to the central repository. Other devs don’t have to pull the code and compile it; they just pull the final artifact from the central repository.

I want a similar process, but for a Docker MySQL database. A dev pushes SQL files with modifications to a Git repo; those SQL files are automatically pulled by Jenkins and executed against a Docker image. If the SQL execution succeeds, the modified image is pushed to our central registry. Other devs just pull the latest Docker image to get an up-to-date database.

I was able to create a Jenkins job which does the trick:

  • pull last mysql image
  • pull last sql files
  • execute sql files on container
  • commit container
  • push image on registry
docker run --name MyAppMysqlContainer -e MYSQL_ROOT_PASSWORD=root -d  OurRegistry/MyAppMysqlImage:latest
docker cp sqlFiles MyAppMysqlContainer:/
docker exec MyAppMysqlContainer /bin/bash -c '/update_database.sh'
docker commit --message "daily build from jenkins" MyAppMysqlContainer OurRegistry/MyAppMysqlImage:${BUILD_ID}
docker tag OurRegistry/MyAppMysqlImage:${BUILD_ID} OurRegistry/MyAppMysqlImage:latest
docker push --all-tags OurRegistry/MyAppMysqlImage 
docker stop MyAppMysqlContainer
docker rm MyAppMysqlContainer
docker rmi -f $(docker images | grep 'OurRegistry/MyAppMysqlImage' | awk '{print $3}')

This job is launched every day. My issue is that the image’s size increases a lot each time a new version is created.

When I inspect the image history in Docker Desktop, I see that an instruction is repeated each time I build:
“--datadir /var/lib/mysql-no-volume”.

I think that I don’t build the image properly, and that what I have done is conceptually wrong. Do you have any hints?

You have to understand that the Docker image is a set of layers on top of each other.

If you always download the latest image and run commands in the container before committing it with the same name, it is like baking pancakes without end: you always get a new pancake (a new layer) without ever eating any of them. You can’t actually delete anything from an image; you can only mark a file as deleted so you won’t see it in the final image.

You shouldn’t use docker commit at all. Use a Dockerfile instead. You can have a COPY instruction to copy the latest SQL files, and if nothing else changes, you benefit from the Docker build cache: everyone downloads only some kilobytes or megabytes as a replacement for the last layer. If you change anything else in your Dockerfile before the COPY, the diff can be larger, but in that case you eat your pancakes except the ones at the bottom.
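A minimal sketch of such a Dockerfile, assuming the SQL files live in a sqlFiles/ directory in the repo (directory name and MySQL tag are assumptions):

```dockerfile
# Start from the same base image every build instead of the previous build's result
FROM mysql:8.0

# The official mysql image executes any *.sql / *.sh files found in this
# directory the first time a container starts with an empty data directory.
COPY sqlFiles/ /docker-entrypoint-initdb.d/
```

With this approach the data is loaded when a container first starts rather than being baked into ever-growing image layers, so the image stays the size of the base image plus the SQL files, and only the COPY layer changes between daily builds.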


Hello,

Thank you for your answer. So what I have to do is rebuild the database image from scratch and push it to our registry?

Thank you.

It depends on what you mean by “build from scratch”, because Docker also has the “scratch” keyword, which means you start without any filesystem or metadata.

You have to choose one base image and start your build from that image every time. Use a Dockerfile for that and never install anything manually in a container to save it as an image.
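Concretely, the Jenkins job from the first post could then shrink to a build-and-push step along these lines (a sketch assuming a Dockerfile at the repo root; registry and image names as in the original script, and BUILD_ID is the Jenkins-provided variable):

```shell
# Always build from the same base image declared in the Dockerfile;
# between daily builds only the SQL-file layer changes.
docker build -t OurRegistry/MyAppMysqlImage:${BUILD_ID} .
docker tag OurRegistry/MyAppMysqlImage:${BUILD_ID} OurRegistry/MyAppMysqlImage:latest
docker push --all-tags OurRegistry/MyAppMysqlImage
```

No docker commit, no container to clean up: the unchanged layers are served from the build cache and the registry stops growing with every build.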