Docker Community Forums


Confused about Docker Commit not saving image

The following is likely a misunderstanding on my part, so any help is appreciated.

I am using the official wordpress:latest image. By default, the docker compose file creates two volumes - one for the database and one for files. I am playing around with running these directly within the container, so I have removed the volume statements. For the purposes of this post, let’s not drill down on why 🙂 - that’s a separate topic. Suffice to say that the instances that do NOT use volumes seem to be much quicker.

After making some initial changes to my WordPress instance (add/remove plugins, etc) I am trying to create images using Docker Commit (one for each container). The commit commands run without errors, the images are created and I can create a docker compose file that successfully launches WordPress from these two new images.
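For context, the workflow described here would look something like the following (the container and image names are hypothetical placeholders; substitute your own):

```
# Commit each running container to a new image
# (names here are made up for illustration):
docker commit wordpress-app my-wordpress:customized
docker commit wordpress-db  my-mysql:customized

# Then reference my-wordpress:customized and my-mysql:customized
# in a new docker compose file.
```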

However, the resultant WordPress environment never reflects the changes I made (adding plugins, etc.). I just get prompted as if it were a fresh WordPress install, i.e. the state before I made my plugin changes.

Since I am just starting out with Docker in earnest, I’d like to understand why this is happening. I see a couple of potential explanations.

  1. Do I need to pause or stop containers before I commit to an image or can I apply it to a running container?

  2. Are there any issues with committing to an image using the same image name i.e. essentially updating the image, rather than creating a new one?

Thank you.

You should never run docker commit.

To answer your immediate question, containers that run databases generally store their data in volumes; they are set up so that the data is stored in an anonymous volume even if there was no docker run -v option given to explicitly store data in a named volume or host directory. That means that docker commit never persists the data in a database, and you need some other mechanism to copy the actual data around.
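You can see this mechanism in the official mysql image itself: its Dockerfile declares the data directory as a volume, which is what causes an anonymous volume to be created at run time even with no -v option. A paraphrased excerpt:

```
# Excerpt (paraphrased) from the official mysql image's Dockerfile.
# This line causes Docker to create an anonymous volume for the data
# directory whenever a container starts, so docker commit never
# captures anything written there.
VOLUME /var/lib/mysql
```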

At a more practical level, anyone you share such an image with can ask questions like “where did this 400 MB tarball come from, why should I trust it, and how can I recreate it if it gets damaged in transit?” There are also good questions like “the underlying database has a security fix I need, so how do I get the changes I made on top of a newer base image?” If you’re diligent, you can write down everything you do in a text file. If you then have a text file that says “I started from mysql:5.6, then I ran …”, that’s very close to being a Dockerfile. The syntax is straightforward, and Docker has a good tutorial on building and running custom images.

When you need a custom image, you should always describe what goes into it in a Dockerfile, which can be checked into source control and from which you can rebuild the image at any time using docker build.
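As a sketch, suppose your notes said “I started from wordpress:latest, then I added a plugin.” The equivalent Dockerfile might look like this (the plugin name and local path are hypothetical):

```
# Hypothetical example: bake a plugin into a custom WordPress image.
FROM wordpress:latest
# Copy a plugin from your working directory into the image's
# plugin directory (the official image serves from /var/www/html).
COPY ./my-plugin/ /var/www/html/wp-content/plugins/my-plugin/
```

Anyone with this file can rebuild the identical image with docker build -t my-wordpress . instead of trusting an opaque committed image.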

For your use case it doesn’t sound like you actually need a custom image. I would suggest setting up a Docker Compose YAML file that describes your setup and stores the data in local directories. The database half of it might look like:

version: '3'
services:
  db:
    image: 'mysql:8.0'
    volumes:
      - './mysql:/var/lib/mysql'
    ports:
      - '3306:3306'
The data will be stored on the host, in a mysql subdirectory. Now you can tar up this directory tree and send that tar file to your colleague, who can then untar it and recreate the same environment with its associated data.
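A minimal sketch of the packaging step, assuming the ./mysql bind mount from the compose file above (stop the database container first so the files on disk are consistent):

```shell
# Stop the database before packaging, so the data files are consistent:
#   docker compose stop db
mkdir -p mysql                       # stands in for the bind-mounted data directory
tar czf mysql-data.tar.gz mysql/     # package the whole directory tree
# Your colleague restores it next to their own compose file with:
#   tar xzf mysql-data.tar.gz
#   docker compose up -d
```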