Pointers for a newbie

I started with Docker Desktop for Windows rather successfully, I'd say.

I need to build a project that is / will be based on a WordPress backend (for now) and I have tried out the official WP image.

It worked surprisingly well - but :slight_smile: I'm still not entirely happy.

The PHP version is 7.4, and while SSL support is possible, it is somewhat complicated because I don't have obvious access to the built-in Apache. Since I have to add phpMyAdmin anyhow, I figured I'd try something different and would be grateful for pointers from the experts.

[ Background info: I'm not a Linux admin at all, but I'm not afraid of the CLI. The purpose of using Docker for the project, however, is to collaborate with colleagues who are even less Linux admins than I am.
I'm OK with administering my local Apache servers on Windows and Mac and also tinker with the occasional configuration of hosted servers. But I really would like not to have to invest time in server support.]

To get a feeling for how best to work with Docker, I would like to do most of the configuration with composer yaml files, and I'm trying to find a way to set up an image with PHP, phpMyAdmin, MariaDB and Apache on Docker Desktop for Windows. (I'm not sure if that is a good idea, but I would think that mounting the php-extension folder as well as the Apache config on the host system would be beneficial.)

I'm running WSL2, installed Ubuntu there and made Docker Desktop use it, as I had lots of trouble getting my volumes mounted on Windows, and I read somewhere that this would help.
It did not help immediately, but I finally have it working (somehow), meaning the files and the installation finally showed up in my project subdirectories as specified - but the volumes are now nowhere to be found in Docker Desktop... So I'm really experimenting, and the web provides outdated and contradictory help.
Mounting volumes in composer cost me a lot of time, and I'm still not 100% sure I have it right.

My goal would be to configure the aforementioned setup in a composer file, fine-tune it, test it on Windows and Mac, and finally deliver the container to my teammates. Fine-tuning a lot on the CLI after the container is composed is something I'd like to avoid as much as possible.

I can and will do my trial-and-error work, but I somehow feel that the examples in the composer documentation are often extremely basic and lack context. The quick-start examples also don't show much about how to configure the included images (e.g. with extensions), so if someone has examples with a little explanation attached, I would be incredibly grateful.

(And by the way, unpacking a WordPress zip into a www folder and setting up wp-config.php is something I do not need an image for. Even if it is nice to see how it works, it might even be that in the end we don't need/use WordPress at all.)

Thank you for reading.

What do you mean by that? How is it complicated?

Mounting PHP extensions is definitely not a good idea. I would not mount config files either, but you can still do that. Note that if you are on Windows, you will have a virtual machine in which the containers are running, so mounting files can be slow and you can also have problems with permissions.

Another reason why you should build an image with your preferred configuration: if you learn how to write an entrypoint and command and use environment variables to generate the final configuration files, you can parameterize your container without mounting files, which can be a source of problems, especially with Docker Desktop, since it runs a virtual machine. This is how I generate the configuration files of my HTTPD image: httpd24-docker-image/apache2/bin at 3e388ffe8a22186922b77a77fa623bbd61ad5f83 · itsziget/httpd24-docker-image · GitHub

Maybe you need a much simpler solution, so you don't need to write as complicated a script as I did.
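Just to show the idea with a much smaller example than my scripts (this is not how my image works, only a minimal sketch): the `SERVER_NAME` variable is something I made up here, and the sed pattern targets the default httpd.conf of the official httpd image.

```dockerfile
# Minimal sketch: generate part of the Apache config from an environment
# variable at container start, instead of bind-mounting a config file.
FROM httpd:2.4

# SERVER_NAME is an example variable for this sketch
ENV SERVER_NAME=localhost

# At startup, rewrite the (commented-out) ServerName line of the default
# config with the value of SERVER_NAME, then start Apache in the foreground
# exactly like the base image would.
CMD ["sh", "-c", "sed -ri \"s|^#?ServerName .*|ServerName ${SERVER_NAME}|\" /usr/local/apache2/conf/httpd.conf && exec httpd-foreground"]
```

Then in the compose file you only set `SERVER_NAME` under `environment:` and nothing has to be mounted.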

Docker Desktop has its own WSL2 distribution. If you installed an Ubuntu distribution, the containers will still run in Docker's own distribution, and only the Docker client will run inside the Ubuntu distribution. Did you know that?

By the way, I started a PHP Composer example project on GitHub 7 months ago, but I could not finish it, so I don't even remember whether it works or not: GitHub - itsziget/php-composer-with-docker: An example project to show how you can use PHP Composer with Docker

Probably not, because I realized my solution only worked for me because I had already initialized the project, so composer update worked but composer install did not. The point is that I run Composer in a container based on the same image as the project I want to run, and I pull the dependencies into a shared folder which is then mounted by the PHP project. I still want to finish that example project, but I don't know when I will have time :frowning:

@rimelek: Thank you very much for your answer. I'll try to learn from your examples.

Let me start with my general idea/hope: as I said, I'm not a Linux user. Since I started programming in 1983 on a ZX81, I'm not afraid of console commands, and I get by using them. But I'm not proficient at all with shell scripts - actually, I have never written one and can't really remember the last time I used one. I understand the concept, and it surely is no “rocket science”; it is just something I normally don't need and therefore would be happy to use as little as possible.

I figure that I have to use/learn the composer options. As far as I understand, I can integrate a lot of options into the services I configure there, and even commands. Once I have the one or two containers up and running that I currently need, I likely won't need to work with this for months; when the need for another one arises, I will likely have to make some adjustments to my yaml, I will have comments in there, and if it works out like I hope, I can then concentrate on my ‘real work’ again.
My experience is that it is very hard to remember a sequence of shell commands and their details a year later, even if I understand them now. The less I have to use them, the better.

With the WordPress image, there is no Apache service in the yaml that I can configure, and also no sub-container in Docker Desktop to start a terminal from. I've seen examples where, it seemed to me, Apache config folders were mounted, and as I'm used to configuring my local Apache through those files, I hoped to do it this way - more or less the way I'm used to.
That's what I meant by no obvious access and complicated. I would better have said that it works differently from what I expected, and I have to change my way of working with it.

My idea was/is to have a sub-container for every relevant service in Docker Desktop, meaning in my case: Apache, PHP, the DB and likely phpMyAdmin. I actually would prefer not to have the WordPress installation/folders in there, as I'm very much used to working with them and would, for example, prefer a standard wp-config file and not one that is populated with Docker variables.

Regarding the virtual machine on Windows: I understand that it is unavoidable, but with WSL2 it is easily fast enough for my purposes. Mounting volumes turned out not to be a real issue once I found out how to do it - which took me ages, mostly because of a lack of good examples and too much outdated information on the web. With most of my tries I received an error stating that the path must be absolute, or that something was not accessible. Almost all solutions I found advised me to check some boxes in Docker Desktop to enable drive access, but those aren't available when running under WSL2. I then found an answer explaining that when you don't have a default distribution in WSL (which I did not have), drives are not accessible. As everything I read about WSL2 said they should be accessible, and I couldn't find a way to successfully mount the volumes where I wanted them, I finally decided to install Ubuntu inside WSL and enabled integration with my default WSL distro in the Docker Desktop settings.
It did not solve things immediately, but after another series of tries with a different syntax it finally worked - at least mostly. I might uninstall Ubuntu and try again.

Actually, I didn't really know that, as I couldn't find a clear explanation of what ‘Enable integration with my default WSL distro’ does, but I deduced it from the containers in Docker Desktop. I'm also not sure I understand what you mean by “Docker Desktop has its own WSL2 distribution”. I guess that means Docker Desktop runs its own Linux inside WSL2, alongside the Ubuntu I installed?
As I understand it, the default distro Docker uses is very lean, and right now I see no need to tinker with that. But as I'm trying to learn: if I add e.g. Ubuntu as a service in my yaml, will that be used instead of the default distro in Docker Desktop, or as an addition / on top?

Here are some things I would like to do with my composer configuration; I don't know what is possible and could not yet find a solution (a rough sketch follows the list):

  1. Have the container run on / be reachable under 127.0.0.127 (for example) instead of 127.0.0.1 from the host.
  2. Have Apache running a virtual host with SSL (self-signed) as ‘project.my-domain.eu’ (letting the hosts file on the host system point to 127.0.0.127) - it should point to a subfolder of www. And configure some other stuff like HTTP/2 etc.
  3. Have PHP 8.2 running with some extensions (I think I found a promising example for that, just cannot find it again right now…)
  4. Have phpMyAdmin running with the MariaDB - any default should do.
  5. Have MariaDB running and be able to reach it from the webserver.
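To make this more concrete, here is roughly what I imagine in compose syntax. I have not tested this; the IP, domain, paths, image tags and the custom build are all just my assumptions:

```yaml
services:
  web:                                   # 1.-3.: Apache + PHP 8.2 from my own Dockerfile with the extensions
    build: ./apache-php
    ports:
      - "127.0.0.127:80:80"              # 1. bind to a specific loopback IP of the host
      - "127.0.0.127:443:443"
    volumes:
      - ./www:/var/www/html
      - ./apache/vhost.conf:/etc/apache2/sites-enabled/project.conf:ro   # 2. vhost for project.my-domain.eu
      - ./apache/certs:/etc/apache2/certs:ro                             #    self-signed certificate + key
  db:                                    # 5. reachable from "web" under the host name "db"
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example
  phpmyadmin:                            # 4. defaults are fine
    image: phpmyadmin
    environment:
      PMA_HOST: db
    ports:
      - "127.0.0.127:8080:80"
```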

Does that make sense?

Another thing: most of the answers on how to use Docker containers with SSL advise the use of a reverse proxy. I don't really understand that. For one, it's not really hard to set up Apache to do SSL itself, and for another, isn't one of the advantages of using containers to mimic a later production environment as closely as possible? And usually there will not be a reverse proxy in my production environments...

I hope you are not describing how you build your image. I sincerely hope that you created a Dockerfile to build your image and do not start a container, run arbitrary commands and then use docker commit to create an image of it.

Rule of thumb: If the components can interact with each other over the network, then they can run in separate containers.

Correct. WSL2 uses a system VM. Instead of interacting with the VM directly, you interact with distributions: in the case of your Ubuntu distribution, directly in its terminal; in the case of the docker-desktop distribution, via network integration.

Correct, the docker-desktop WSL2 distribution is lean, and there is no reason to tinker with it directly. It provides the container runtime for all your deployed container workloads.

With yaml, I assume you mean a compose file, which is used to orchestrate a set of services (~= containers), network and volume configurations. It cannot be used to configure the docker-desktop distribution, but deployments will be deployed into the docker-desktop distribution. As @rimelek already said, your Ubuntu WSL2 distribution is just the client CLI part of Docker; the container runtime (the server backend) runs in the docker-desktop distribution.

Your other questions:
#1 In your situation, there is no need to pin container IPs. In Docker, user-defined networks have DNS-based service discovery: services of another container can be reached by the service name (from the compose file) and the container port of the application (see the snippet after this list).
#2 I am not exactly sure what this is supposed to mean, but you can use volumes mounted to one or more containers to share files amongst the containers.
#4 looks good to me
#5 looks good to me
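For #1, a minimal sketch of what I mean (the image tags are just examples): the app container reaches the database container by its service name, no IP addresses appear anywhere in the file.

```yaml
services:
  app:
    image: php:8.2-apache
    # inside this container the database is reachable as "db" on port 3306,
    # because compose puts both services on the same user-defined network
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example
```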

The reverse proxy approach is used to have a central entry point for all your containers, manage all certificates in a single place, and delegate TLS termination to it. This way the containers don't have to deal with it themselves.

In the last 20 years, I have not seen a single enterprise customer who did not use a load balancer/reverse proxy in front of the web application. If you have no use case for it, then feel free not to use one.

In my homelab, I would not want to miss my Traefik reverse proxy. It takes care of generating and renewing Let's Encrypt wildcard certificates and forwards the traffic to target containers based on domain names. Traefik's reverse proxy rules are configured by adding labels to the container configuration - the rules are applied in Traefik when the container is started and removed when the container is stopped.
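Just as an illustration of the label-based configuration (the domain, router name and certresolver name are placeholders from my own style, not something you must use):

```yaml
services:
  whoami:
    image: traefik/whoami            # tiny demo web service
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
      # the Traefik container itself (not shown here) watches the Docker
      # socket and picks these labels up when the container starts
```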

Some additional notes

If you want to work with Linux containers, you are going to write more shell scripts. Not necessarily in a script file, but at least in a Dockerfile, where you describe the installation process, set variables and some metadata, and copy configuration files into the image. Then you can share that Dockerfile with your colleagues so they build the image the same way, or you can push the image to a registry and let your colleagues pull the same image that you created.
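A very small sketch of what such a Dockerfile could look like (the config file name and the label value are made-up examples):

```dockerfile
# Start from an official image that already contains Apache HTTPD + PHP
FROM php:8.2-apache

# Bake your own vhost configuration into the image instead of mounting it
COPY my-vhost.conf /etc/apache2/sites-available/000-default.conf

# Enable the Apache modules the vhost needs
RUN a2enmod ssl rewrite

# A little metadata
LABEL org.opencontainers.image.authors="you@example.com"
```

Your colleagues can build it themselves with `docker build` (or a `build:` section in the compose file), or you build and push it once and they only pull it.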

Composer or Compose?

It depends on the image variant you choose. There are images in which PHP is an Apache HTTPD module, and other images in which you don't have Apache HTTPD at all, because that WordPress image variant contains only PHP-FPM, which is then used by another container that runs Apache HTTPD. This is exactly why I created my HTTPD image: to make the connection settings between the HTTPD container and the PHP containers easier. The official HTTPD image has its configuration in `/usr/local/apache2/conf`, but it can be configured differently in other images.
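With the Apache module variant (the default `wordpress` image) one service with a port mapping is enough. The PHP-FPM variant needs two containers and a shared folder for the WordPress files; a rough, untested sketch, where the httpd.conf is something you have to write yourself:

```yaml
services:
  httpd:
    image: httpd:2.4
    ports:
      - "8080:80"
    volumes:
      # your config has to set the DocumentRoot and proxy PHP requests
      # to wordpress:9000 with mod_proxy_fcgi
      - ./httpd.conf:/usr/local/apache2/conf/httpd.conf:ro
      - wp-files:/var/www/html:ro
  wordpress:
    image: wordpress:fpm        # PHP-FPM only, no web server inside
    volumes:
      - wp-files:/var/www/html

volumes:
  wp-files:
```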

What do you mean by sub-container?

If I understand what you want (accessing the website from the host) then change the port mappings from host_port:container_port to host_ip:host_port:container_port, but Docker should listen on all IPs by default.
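For example (the image and ports are just placeholders):

```yaml
services:
  web:
    image: php:8.2-apache
    ports:
      - "8080:80"             # host_port:container_port - listens on every host IP
      - "127.0.0.1:8443:443"  # host_ip:host_port:container_port - only on that IP
```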

You can add your IP and domain name to your hosts file if that is the question. It will not point anywhere except to your webserver, but you can configure the webserver to use any folder.

The description of the official PHP image explains how you can add PHP extensions. You can also use an image that already contains those extensions (since you don't want to use the WordPress image). I have a PHP image with many extensions: https://hub.docker.com/r/itsziget/php/. There is a link to the source if you want to build your own version with fewer extensions.
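This is basically the pattern the PHP image description shows; the chosen extensions here are only examples:

```dockerfile
FROM php:8.2-apache

# system libraries needed by the gd and zip extensions, then the helpers
# documented on the official PHP image page
RUN apt-get update && apt-get install -y --no-install-recommends libpng-dev libzip-dev \
 && docker-php-ext-install -j"$(nproc)" gd zip pdo_mysql \
 && pecl install xdebug \
 && docker-php-ext-enable xdebug
```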

And I also found that someone created a project that works as an extension installer for any image. I don't remember its name.

@meyay & @rimelek: Thank you both so much for your patience with me.

Believe it or not, before I started installing Docker Desktop I did research for at least half a day - all in all more like a whole day, I guess.

I usually avoid video tutorials on YouTube, but I even watched some of those and dug through the official docs etc.

In the end I started with an empty folder in which I placed the ‘docker-compose.yaml’ from here and made some adjustments. I took those back after they did not work as expected, and then it simply ran.
Until today I had no real notion that there is a difference between building, running, or committing an image. :person_facepalming:
The ‘thing’ was doing its job immediately after running “docker compose up -d” and seemed not to need anything more. It conflicted with my local Apache, which I temporarily deactivated, and I experimented and researched for hours to get some basic adjustments into the yaml file - which in the end were more or less successful.

In all my research I was never once hit over the head with the use of a Dockerfile. I'm rather sure it might have been mentioned here and there, but not in a way that made me actually grasp its function or importance. Only after the first pointers here did I begin to get the idea.

I may be slow sometimes, but I'm usually not dumb, and in this case I really felt that this was missing. I just now found out that maybe the getting-started video tutorial would have helped - but a 3-hour video... I'm probably just too old for that, and looking things up afterwards... well, I'm too old-school.
It looked to me a lot like it is mostly about installing things - my bad. Also, the term ‘app’ in that regard didn't work for me: I want to run a (server) environment and build my web app inside it, not have the container as an app...

(I'm just mentioning this here so that maybe someone can try to improve things, be it only by linking that tutorial and its entry points from appropriate pages of the docs, and maybe also doing a rough transcription so that it can be found better via Google...)

Right now I would like to thank you a lot for your pointers. I will get some popcorn, make my way through those hours of not-Netflix, and come back afterwards if need be. Thank you so much!

Have you tried this YAML?

I found it makes it easier to configure stuff.

@johnsimmonshypertext: Thank you a lot for the link. This sounds very interesting. By now I have it working my way, and as soon as I have worked with it a little and maybe fine-tuned it, I'll put it on GitHub and post a link here.