I have been trying to install Drupal using the official image from Docker Hub. I created a new folder on my D: drive for my Drupal project and created a docker-compose.yml file in it.
The docker-compose.yml from Docker Hub:
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
# (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:

  drupal:
    image: drupal:8-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up -d command in a terminal from within the folder containing the docker-compose.yml file, the Drupal container and its database were successfully installed and running, and I was able to access the site at http://localhost:8080. However, I couldn't find Drupal's core files and code in the folder that contained the docker-compose.yml file; the folder held nothing but the docker-compose.yml file itself.
I then removed the whole Docker container and began a fresh installation, editing the volumes section in the docker-compose.yml file to point to the folder where I want Drupal's core files and code to be populated, for example D:/Projects/Drupalsite.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
# (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:

  drupal:
    image: drupal:latest
    ports:
      - 8080:80
    volumes:
      - d:\projects\drupalsite/var/www/html/modules
      - d:\projects\drupalsite/var/www/html/profiles
      - d:\projects\drupal/var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - d:\projects\drupalsite/var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran docker-compose up -d again, I received the error shown below.
Container drupalsite_postgres_1  Created   3.2s
Container drupalsite_drupal_1    Creating  3.2s
Error response from daemon: invalid mount config for type "volume": invalid mount path: 'z:/projects/drupalsite/var/www/html/sites' mount path must be absolute
PS Z:\Projects\Bixeltek>
Hello @avbentem, my apologies. This issue is the same as in my previous post; I deleted that thread since I felt the forum where I posted it wasn't appropriate.
Secondly, I have fixed my post above to make it more readable for everyone.
Correct, volumes are not meant to sync content from a container back to your local file system. (Even though Docker may store them somewhere on your local file system.) Seeing the attempt below, it seems you want to share a folder on your local file system with the container?
However, volumes are fully managed by Docker (you cannot choose a folder on your local file system for a volume), and are not typically changed from your local file system directly. To sync with a specific local folder, if that’s indeed what you want, you need bind mounts instead. In the above, all that is missing to use a bind mount is the colon between the host path and the container path. Like:
- d:\projects\drupalsite:/var/www/html/sites
If that’s what you want, then it still does not make sense to bind mount the same folder d:\projects\drupalsite to multiple locations in the container, like /var/www/html/modules, /var/www/html/profiles and /var/www/html/themes. (Also, one of your lines uses d:\projects\drupal while the other 3 use d:\projects\drupalsite.)
So, I guess you only want:
volumes:
  # Share a local folder with the container, so that changes
  # to the local d:\projects\drupalsite and the container's
  # /var/www/html/sites are always synced, using a bind mount:
  - d:\projects\drupalsite:/var/www/html/sites
With the above, only the container’s /var/www/html/sites is synced with your local file system; the other folders such as /var/www/html/modules, /var/www/html/profiles and /var/www/html/themes are taken from whatever the original image provided.
To expand the insight:
– Docker volumes (not bind mounts) can copy existing content from the target folder into the volume the first time the volume is used. This is true unless the volume is specifically configured not to copy pre-existing content.
– it is possible to create a volume backed by a bind mount that actually behaves like a volume (this requires a rather different configuration than the bind mounts you defined).
I feel it is generally not a good idea to rely on the copy mechanism, as the outcome might be unpredictable on future updates of the image, when new mandatory files are added that won’t be available in an existing volume (which holds an older copy of the folder).
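The copy behavior from the first point can be sketched in Compose syntax; the volume name drupal-sites here is made up for illustration:

```yaml
services:
  drupal:
    image: drupal:8-apache
    volumes:
      # Named volume: the first time it is used, Docker copies the image's
      # existing content at /var/www/html/sites into the volume.
      - drupal-sites:/var/www/html/sites
      # With the "nocopy" flag the volume would start out empty instead:
      # - drupal-sites:/var/www/html/sites:nocopy

volumes:
  drupal-sites:
```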
@avbentem, as someone from Germany, I really dig the use of “geheimtip”.
Also, my “I guess you only want” may not hold: when using Drupal as a CMS, it may also change the contents of the other folders in /var/www/html/? If so, then you’ll indeed also want Docker-managed volumes for those (or a bind mount for all of /var/www/html), to persist those changes (and to create backups). I’ve no idea where Drupal stores all its data.
Aside, I was once corrected by someone else from Germany, stating it’s “geheimtipp” with two Ps since some spelling change many years ago? (Even though many on the interwebs use a single P indeed.)
Your comment above the volume declaration does not match the type of volume declared: it is a bind-mount and as such this behavior does not apply to it.
On Linux you would determine the mountpoint of a (real!) volume with:
Are you saying one could write to a Docker-managed volume directly from the host? The documentation says that’s a no, but I’ve never tested:
Volumes are stored in a part of the host filesystem which is managed by Docker ( /var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
Had my first coffee after the post, so here we go with thinking it through:
For read operations it shouldn’t be a problem at all.
For write operations it shouldn’t be a problem either, as for an anonymous/named volume the mountpoint /var/lib/docker/volumes/${volumename}/_data is the folder where the content is stored. (Irrelevant for this case, but good to know: for named volumes backed by a remote share, this folder is the location where the remote share is mounted.)
I don’t see a real reason not to mess around with its content. Though it’s absolutely true that you should not mess around with the files inside its parent folders.
But there is one thing I am not 100% clear on: whether this behavior still holds when accessing the files through the WSL$ share. I don’t see any obvious red flags here, but who knows.
Initially I tried a volume, as you can see in my original post. It didn’t populate my local folder with the core files and code from the image. Now I tried a bind mount and the result is the same.
And that is to be expected: volumes are managed by Docker; you cannot tell Docker where to store the volume. (Even more, even if you could tell Docker where to store the volume, Docker would still not consider “the folder which contained the docker-compose.yml file” to be empty anyhow, as it would already hold that very docker-compose.yml file that you mentioned.) The only thing Docker could do for empty volumes, is copy data from the image into the volume. That does not involve a folder of your own choice on your local file system.
Also to be expected, as for bind mounts Docker is not going to copy anything at all, even when the target folder on your local file system is empty.
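On a Linux host you can see this for yourself; the following is just a sketch (paths made up) to illustrate that an empty bind mount hides, rather than receives, the image's content:

```shell
# Bind mount an empty host folder over a non-empty path in the image:
mkdir -p /tmp/empty-bind
docker run --rm -v /tmp/empty-bind:/usr alpine ls /usr
# Nothing is listed: the empty bind mount shadows the image's /usr,
# and Docker copied nothing into the host folder either:
ls /tmp/empty-bind
```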
That said: does using the volume like @meyay explained (so, from the folder that Docker manages for that volume) work for you?
Alternatively, use the docker cp command (or the docker container cp command) to copy the original content from the container into your local project folder once, and after that use a bind mount to use that project folder instead of the original folder in the container.