How to build my own LAMP image with persistence

Hi, I am a newbie, but really interested in Docker.
I have experience working with LXC containers.
I usually have some kind of web environment for running web applications locally (e.g. MAMP, XAMPP…). But I am not satisfied with them. I know how to configure each of these components by myself, but these applications provide limited access.

So I decided to use Docker for creating required image for my projects.

The basic idea is to have:

  1. A Docker container with the LAMP stack running. I know the best practice is to use a separate container for each application, but I really want to keep the whole LAMP stack in one container and treat that as one container for one purpose.
  2. The ability to use a host machine folder inside the container, i.e. to mount my project from the host into the web apps directory inside the Docker container.
  3. Persistence. I want the MySQL database to save its state between launches, but it would be great if the whole container preserved its state.

I’ve been working with LXC containers for some time, and in general I need exactly the same features. I use an LXC container as a simple Linux machine: install updates and required packages, configure settings files … exactly the same way I do on a real Linux machine.
So maybe I am missing something and Docker works in another way, but as far as I know it is just an extension of LXC containers.

Please suggest what steps I should follow in order to achieve the desired result.

Thanks.

I’d recommend doing basically the exact opposite of what you propose:

  1. Add a Dockerfile to your application code that starts FROM a prebuilt language runtime image (I know there are standard python and php images) and COPYs your application code in; there is a minimal sketch below, after this list. Add this file to your source control.
  2. Write a docker-compose.yml file that starts a mysql image, has a build: . container that builds and runs your application image, and (if you need it) includes a prebuilt Apache or Nginx reverse proxy. Add this file to your source control.
  3. When you update your application, run its local tests, then re-run docker build (or docker-compose build) and restart only the application container.
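
To make step 1 concrete, here is a minimal sketch of what that Dockerfile might look like, assuming a PHP application served by Apache; the base image tag, extension, and document root are only examples, so adjust them to your stack:

    # Hypothetical Dockerfile for a PHP application
    FROM php:7.4-apache

    # Install any PHP extensions the application needs (example only)
    RUN docker-php-ext-install pdo_mysql

    # Bake the application source into the image, under Apache's document root
    COPY . /var/www/html/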

The default database images will have built-in persistence without you doing anything special, or if you want you can cause them to persist their data on the host system using a volumes: block in the Docker Compose YAML file.
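
And a hedged sketch of the docker-compose.yml for steps 2 and 3; the service names, image tags, and credentials are illustrative, and the volumes: block at the bottom is what keeps the MySQL data around between container launches:

    # Hypothetical docker-compose.yml; names and credentials are examples
    version: "3"
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example
          MYSQL_DATABASE: appdb
        volumes:
          - db_data:/var/lib/mysql   # named volume, so data survives container removal
      app:
        build: .                     # builds the Dockerfile shown above
        ports:
          - "8080:80"
        depends_on:
          - db
    volumes:
      db_data:

After that, docker-compose up -d starts both containers, and docker-compose build rebuilds just the application image when your code changes.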

There are a couple of reasons to do things this way. It’s much more flexible in terms of deployment (will you have a “real” database server running outside a container somewhere? will you ever want to deploy this onto a system that doesn’t have the source code locally? docker push the working image to a central Docker registry?). Having the two text files that describe how the system is built checked into source control means that if /var/lib/docker or your local Linux VM breaks, you can easily reconstruct everything.

It sounds like your proposed path is to build a single “precious” container that’s hand-constructed and has all of the database state, but none of the application code. You’re at high risk of losing that container without a Dockerfile to reconstruct it, and you’re not really gaining anything over just installing the application runtime on the host system.


Thank you for the reply, and sorry for the late response.

Maybe I am still missing something, I’ve read a lot of articles about Docker, but still a little bit confused about all this stuff.

Generally, as I understand it, Docker containers are used for deploying different applications, when you want to test an application in a real environment.

I understand the basic idea: have a READ-ONLY image with a configuration file, so we can boot up the required environment very quickly. When our container is running, all data is written to the container layer, so after rebooting the container all the data is lost (I mean the data that was written to the container layer while the container was running).

But a Dockerfile is used to copy, set up, and configure an environment for an application. For example, we have a base Apache image, but our application requires vhosts to be set up, so we put this stuff in a Dockerfile. Each time a Docker container starts up, it performs all the commands written in the Dockerfile.

I have seen a bunch of videos where a static application was deployed, and the COPY command was used to copy the source files. So every time I change something in the source code, I have to rebuild and restart the Docker container.

So it is more about a static environment.

Am I right?

But I want to leverage the Docker infrastructure just to isolate my development environment.

In general, I want to create something like a VM with a LAMP stack plus a shared folder with the source files, so that every time I change the source files I can see the changes instantly.
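
For example, something like this (the image name and paths are just an illustration of what I mean), where a host folder is mounted into the container's web root so edits show up immediately:

    docker run -d -p 8080:80 \
        -v "$(pwd)/my-project":/var/www/html \
        my-lamp-image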

Maybe Docker cannot satisfy such requirements and it would be better to use a separate VM, but I have previous experience working with LXC containers, and they work the right way (for me): like a separate VM, but much more lightweight because they share the same kernel.

I understand that it is better to keep a single container for a single application. In my case I need at least two images:

  1. MySQL
  2. Apache + PHP + Python …

Here are some questions.

  1. Should I use a pair of MySQL + Apache containers for each project? So, if I have three projects, should this result in something like:

{Project 1 [MySQL + Apache], Project 2 [MySQL + Apache], Project 3 [MySQL + Apache]}

= 6 containers running?

Or should I share two running containers between the three projects?

  2. Is the only way to get persistence in my case to use the volumes argument to mount a source code directory?

I would be grateful for any help. Maybe there are some good videos or articles to read to get a better understanding.
Thanks

Not quite. When you run docker build, the filesystem at the end of the Dockerfile is baked into an image. This means that (a) if you do something time-consuming like an apt-get update inside the Dockerfile, it doesn’t get repeated when you launch the image; and (b) if you try to do something like start a daemon inside a Dockerfile, it won’t actually be started when you launch the image.
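
A tiny sketch of that distinction (the base image and package are just examples): the RUN line executes once during docker build and its result is baked into the image, while the CMD line is what actually runs each time a container starts from that image:

    # Hypothetical Dockerfile illustrating build time vs. run time
    FROM ubuntu:20.04

    # Runs once, at "docker build" time; the installed packages are baked into the image
    RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y apache2

    # Runs each time a container is started from the image
    CMD ["apachectl", "-D", "FOREGROUND"]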

Correct. (And I’m very used to the C/C++/Java/Go workflow, where you need to rebuild and restart your application anyways, and even the Python things I do in my day job have a package-and-deploy step, so this feels “normal” to me.)

If you add a docker-compose.yml file to your applications’ source directories that declares the two containers, it will be easy to start them up, and you’ll wind up with six running containers in your case. That’s fine. In the broader non-Docker world I feel like sharing a database is normal but it’s less trivial to set up in Docker.
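
Roughly, assuming each project directory has its own docker-compose.yml with a db and an app service (the directory and service names here are hypothetical), you would end up with something like:

    cd ~/projects/project1 && docker-compose up -d   # e.g. project1_db_1, project1_app_1
    cd ~/projects/project2 && docker-compose up -d   # e.g. project2_db_1, project2_app_1
    cd ~/projects/project3 && docker-compose up -d   # e.g. project3_db_1, project3_app_1

The exact container names depend on your Compose version, but each project gets its own isolated pair, so the three projects never interfere with each other's databases.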

You can also have Docker manage the persistent space itself using the docker volume command without specifically worrying about where it is on the host.
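
For example (the volume and container names are illustrative), you can create a named volume yourself and attach it, or just declare it under a top-level volumes: key in Compose as in the earlier sketch:

    docker volume create mysql_data
    docker run -d --name db \
        -e MYSQL_ROOT_PASSWORD=example \
        -v mysql_data:/var/lib/mysql \
        mysql:5.7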


Thank you so much for the answer. I have a better understanding now.
I have found an article, here is the link http://www.masterzendframework.com/docker-development-environment/

I wonder if this is the right way to build a local environment in my case, and whether the basic ideas of Docker are applied correctly there?

Thanks