Docker Community Forums


Templates to create host folder

I’m trying to create a host directory per service / slot on Docker 18.09.2 / Windows 10 1903 64 bit Professional.

I have a drive shared, and if I specify an existing directory, everything works as expected. I can both read and write from my containers to the host folders.

However, I need to run multiple copies of the same service, and I need to keep their logs separate. I was hoping that the following would work in the volumes tag.

    - "j:/logs/{{.Service.Name}}{{.Task.Slot}}/:/usr/local/tomcat/logs"

The logs directory exists, but not the underlying directory that I’m trying to create. However, when I run docker stack deploy, I get the following error:

failed to create service app_calcs: Error response from daemon: Mount denied:
The source path "J:/logs/{{.Service.Name}}{{.Task.Slot}}"
doesn't exist and is not known to Docker

I have full permissions for all users / system users / administrators on this directory. I have unshared and reshared the J: drive. I have reset my credentials. I have tried running this from an Administrator account.

Nothing seems to work.

Is it possible to create per slot host directories when deploying a stack? If so, how?

Swarm template variables cannot be used in the volumes section of a stack file.

Though, if you create a named volume, you can use template variables for the source element. It is a pity that they can't be used on the volume name itself. Using them on the source will result in named volumes being created when a container is scheduled for the first time.

Of course you could create a named volume on each node pointing to a different folder.
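For reference, a local named volume that bind-mounts a pre-existing host folder can be declared like this in the compose file (the volume name `app_logs` and the `J:/logs/app1` path are only illustrative):

```yaml
volumes:
  app_logs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "J:/logs/app1"   # must already exist on the node
```

Docker only creates the volume handle here; the host folder itself has to exist before a container using the volume is started.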

So if I understand your answer correctly, I need to create a named volume at the top of my compose file, and during launch Docker will create a new source folder per service slot?

That should work.

Creating multiple services, each with the same image and a pre-created host directory seems a bit of a kludge, but I suppose I could do that.

This is just a testing environment to see if people wrote their Java web applications properly (i.e., serializable session variables, among other things).

Actually, I am not sure if they are created for you… I used that method on a three-node etcd3 cluster and it worked like a charm.

Update: a named volume is nothing more than a handle on a folder or remote share. The handle is created, not the folder or remote share itself - those have to exist before the handle for the named volume is created. Update-End

The best approach is actually to have a single named volume for all instances: e.g. myvolume, which is mapped to /data/ (or anywhere else). Then you introduce a new environment variable called ID and assign the value {{.Task.ID}} to it. Your entrypoint script needs to modify your application configuration to use /data/$ID as its data folder. The advantage is: regardless of where a task is scheduled, the instance will always access the correct data folder, even though the volume mapping is identical for all the replicas.
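A minimal compose sketch of that approach might look like the following (the service name `app` and the image are placeholders; the shared named volume maps to the same path in every replica):

```yaml
services:
  app:
    image: my-tomcat-app        # placeholder image
    environment:
      - ID={{.Task.ID}}         # resolved per task by Swarm
    volumes:
      - myvolume:/data
    deploy:
      replicas: 3

volumes:
  myvolume:
```

The entrypoint (or the application itself) then reads $ID and works under /data/$ID, so every replica writes to its own subfolder of the one shared volume.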

Does that make any sense?

That makes sense, although if the directories are not created it is of little utility.

It does make the compose file a bit cleaner in that I don’t have to write specific services for each container. I still have to create the directories though.

This looks to be non-trivial for Tomcat, which is the servlet container that I have to use.

If you are willing to do some minor rewrites, the solution wouldn't be that hard to implement:

As far as I remember, Tomcat has a lifecycle during start and allows you to get/set application-scoped variables. You could add a handler that reads the OS environment variable with System.getenv("ID") and makes it available as an application-scoped variable. Then just reuse the application-scoped variable in your code. It has been a long time since I had my fingers on Tomcat; maybe the scope was called differently…
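A minimal Java sketch of that idea; the class and method names here are made up for illustration, and the piece that a real deployment would add (e.g. a ServletContextListener registered in web.xml that calls this during startup) is only assumed:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper: resolves the per-task ID from the environment.
// A ServletContextListener (assumed, not shown) could call this from
// contextInitialized() and store the result as an application-scoped
// attribute for the rest of the webapp to use.
public class TaskIdResolver {

    // Read the ID environment variable set in the compose file,
    // falling back to "local" when running outside Swarm.
    public static String resolve() {
        String id = System.getenv("ID");
        return (id == null || id.isEmpty()) ? "local" : id;
    }

    // Build the per-task data directory, e.g. /data/<task-id>.
    public static Path dataDir(String base) {
        return Paths.get(base, resolve());
    }

    public static void main(String[] args) {
        System.out.println(dataDir("/data"));
    }
}
```

The fallback value keeps the same code path working when the container is run outside a stack, where no template expansion happens.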

After a little bit of hacking on server.xml, I got things to work.

My major issue was getting the environment variables to work in docker-compose.yml. I had some extraneous double quotes that were messing things up.

I now have directories in my logging directory for each service container - named ServiceName_TaskSlot.
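For anyone hitting the same quoting issue: the template placeholders should not carry extra double quotes of their own inside the compose file. A sketch of what worked for me (the variable names SERVICE and SLOT are illustrative):

```yaml
services:
  app:
    environment:
      # No inner quotes around the templates; Swarm expands them per task.
      - SERVICE={{.Service.Name}}
      - SLOT={{.Task.Slot}}
```

The entrypoint can then point Tomcat's logging at a directory such as ${SERVICE}_${SLOT} under the mounted logs folder.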

This will let me see if there are any serialization issues with the applications as I exercise them.

Thanks for the pointers. If you think it’s of any value, I can post the relevant parts of the code.

Thank you for the offer.

Actually, in our projects the logging is always configured to output logs to STDOUT and STDERR.
We simply delegate the whole log output to Docker, which can then be configured to forward the logs to a log management system of choice (which usually ends up being a flavor of ELK).

My suggestion to change the code was aimed at a scenario where your application writes files to the filesystem in general. If it is just about logging, the Docker-standard way of writing logs to STDOUT/STDERR is the better solution.
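When logs go to STDOUT/STDERR, rotation and forwarding can then be configured on the Docker side per service, for example (the json-file driver and its options are just one common choice, not a requirement):

```yaml
services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

Swapping the driver (e.g. to gelf or syslog) forwards the same stream to a central log management system without touching the application.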