Docker Community Forums

Share and learn in the Docker community.

Docker-compose --volumes-from


#1

Hello. I’m trying to share a folder between two different containers using docker-compose. With the plain docker client I can use the --volumes-from option for this. So let me give you more details.

First I created a container with some binary data. I saved this data inside the container, in the folder /home/dev/tmp for example. I need to keep this data inside the container because it was created while building the container. In other words, when I don’t have any containers on my host machine, I first need to build them, and during the build process the binary data I mentioned is created. In the future I don’t want to rebuild the container; I want to reuse the binary data that was produced during the first build. So I kept the data inside the first container. Let’s call it ‘builder’.

I want to mount the directory /home/dev/tmp inside my second container. Of course, I need the binary data inside my second container. How can it be done?
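(Editor’s note: if the second image only ever needs files produced while building the first one, a multi-stage build can also copy them in at image build time, with no shared volume at all. A minimal sketch, assuming the binaries end up under /home/dev/tmp of a builder stage; requires Docker 17.05 or later:)

```dockerfile
# Stage 1: hypothetical builder stage that produces the binary data.
FROM ubuntu:latest AS builder
RUN mkdir -p /home/dev/tmp && \
    echo "some binary data" > /home/dev/tmp/my-binary-data-file

# Stage 2: the final image copies the data out of the builder stage.
FROM ubuntu:latest
COPY --from=builder /home/dev/tmp /home/dev/tmp
```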


(Gary Forghetti) #2

Hi, I’m confused because you are using the wrong terminology.

You build a docker image with a docker image build command.
The docker image is just what it says, it is just an image. It is not running.

You then can run a docker container from the docker image by running a docker container run command.

Are you creating data inside the /home/dev/tmp folder when you are doing the build of your image (when you run a docker image build command)?

Or are you creating data inside the /home/dev/tmp folder when you run a docker container from the docker image (when you run a docker container run command)?


#3

Gary, thanks for your attention to my issue. I will explain in more detail.

  • I built my image, and during the build I created the folder /home/dev/tmp and put in it the binaries that I want to use in another container (created from another image).

  • I created 2 more images. They have no relation to the first image.

  • From 3 different images I plan to create 3 different containers.

  • The image with /home/dev/tmp will be the source for pure data container (if I use correct terminology).

  • From the second image I want to use files from the pure data container. This is the main goal.


(Gary Forghetti) #4

So you will have 3 docker images and wish to run 3 containers (one from each image).

Docker image 1 (this one contains your binary data)
You build this docker image and during the build you create the /home/dev/tmp directory and put binary data in that directory.

Questions about Docker Image 1

–Are the contents of the /home/dev/tmp directory code or data?
–Does this image contain an application which will run when you run a container from this docker image?
–When you run a container from this docker image, will the binary data in the /home/dev/tmp directory be changed by this container?
–Is this data sensitive? Does it need to be encrypted or protected?
–Do you need to back up this data in case the container or the docker node crashes?
–What happens if this container crashes or the docker node crashes? Do you need to recreate the data, or can you just start a new container from this docker image?

Questions about Docker Image 2 and Docker Image 3

When you run containers from these images and wish to access the binary data created by the container started from Docker Image 1, will these 2 containers be updating the binary data in the /home/dev/tmp directory of the container started from Docker Image 1, or will they just read the binary data?

How often will you need to rebuild Docker Image 1 to recreate the /home/dev/tmp directory and put new binary data in it? Several times a day? Daily? Weekly? Once in a while? Rarely? Never, because it’s a one-time thing?
What happens to the containers running Docker Image 2 and Docker Image 3 if they cannot access the binary data from the container running Docker Image 1? Can those applications “recover/retry”?

You need to consider that in the virtual world, virtual machines, containers and applications all have failures, go down, require maintenance, new versions, etc.


#5

I hope my task doesn’t need such a general discussion. I just need to make the /home/dev/tmp folder visible to another container. I know how to process the data inside it, believe me. Just show me the way to make this folder shared. That’s all I need.


(Gary Forghetti) #6

Well, without more details, here is how I would do it.

The containers have to run on the same Docker node.

Create a docker volume

🐳  gforghetti:[~] $ docker volume create my_binary_data
my_binary_data

Display the volume information

🐳  gforghetti:[~] $ docker volume ls
DRIVER              VOLUME NAME
local               my_binary_data
🐳  gforghetti:[~] $ docker volume inspect my_binary_data
[
    {
        "CreatedAt": "2019-02-11T15:12:50Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my_binary_data/_data",
        "Name": "my_binary_data",
        "Options": {},
        "Scope": "local"
    }
]

Run a container, mount the volume into it, and bring up a shell inside the container.

🐳  gforghetti:[~] $ docker container run -it --name my-app1 --volume my_binary_data:/home/dev/tmp alpine:latest sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
6c40cc604d8e: Pull complete
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
/ # 

Display the mounted file system on the volume.

/ # ls -la /home/dev/tmp/
total 8
drwxr-xr-x    2 root     root          4096 Feb 11 15:12 .
drwxr-xr-x    3 root     root          4096 Feb 11 15:13 ..
/ # 

Create a file and put some data in it

/ # echo -e "Some binary data\nSome more binary data" > /home/dev/tmp/my-binary-data-file
/ # 

Display the contents of the file

/ # cat /home/dev/tmp/my-binary-data-file
Some binary data
Some more binary data
/ #

Bring up another container and share the same volume.

🐳  gforghetti:[~] $ docker container run -it --name my-app2 --volume my_binary_data:/home/dev/tmp alpine:latest sh
/# 

Access the file created by the 1st container my-app1

/ # cat /home/dev/tmp/my-binary-data-file
Some binary data
Some more binary data
/ # 

Bring down both containers and remove them.

🐳  gforghetti:[~] $ docker container rm -f my-app1
my-app1
🐳  gforghetti:[~] $ docker container rm -f my-app2
my-app2
🐳  gforghetti:[~] $ docker container ls -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
🐳  gforghetti:[~] $

Bring up a new container and access the volume

$ docker container run -it --name my-app3 --volume my_binary_data:/home/dev/tmp alpine:latest cat /home/dev/tmp/my-binary-data-file
Some binary data
Some more binary data

The volume persists on the Docker node until you remove it.


#7

Oh, are you joking? I can do that with the plain docker client without any trouble. I wrote in the first message that I need to reproduce the behaviour of the --volumes-from option with docker-compose.
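(Editor’s note: a literal `volumes_from` option does exist in the version 2 Compose file format, but it was removed in version 3 in favour of named volumes. A minimal sketch, assuming a hypothetical builder service whose image contains /home/dev/tmp:)

```yaml
# Compose file format version 2 only -- volumes_from was dropped in version 3.
version: '2'

services:
  builder:
    image: my-builder:latest   # hypothetical image containing /home/dev/tmp
    volumes:
      - /home/dev/tmp          # volume exposed by this service

  consumer:
    image: ubuntu:latest
    volumes_from:
      - builder                # mounts all of builder's volumes, like --volumes-from
```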


(Gary Forghetti) #8

Sorry, here’s an example with docker-compose to do a build and run.

Dockerfile

FROM ubuntu:latest
WORKDIR /tmp

RUN echo -e "Some binary data\nSome more binary data" > /tmp/my-binary-data-file 
CMD cp /tmp/my-binary-data-file /home/dev/tmp && sleep infinity

docker-compose.yml file

version: '3.3'

volumes:
  my_binary_data:

services:
  my-app1:
    build:
      context: ./
      dockerfile: Dockerfile
    image: my-app1:latest
    volumes:
      - my_binary_data:/home/dev/tmp   

  my-app2:
    image: ubuntu:latest
    depends_on:
      - my-app1
    command: ["sh","-c","cat /home/dev/tmp/my-binary-data-file && sleep infinity"]
    volumes:
      - my_binary_data:/home/dev/tmp  

Build and run

🐳  gforghetti:[~/Downloads] $ docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Creating network "downloads_default" with the default driver
Creating volume "downloads_my_binary_data" with default driver
Creating downloads_my-app1_1 ... done
Creating downloads_my-app2_1 ... done
🐳  gforghetti:[~/Downloads] $ docker container logs downloads_my-app2_1
Some binary data
Some more binary data
🐳  gforghetti:[~/Downloads] $
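(Editor’s note: if the volume was created outside Compose, as with the earlier `docker volume create my_binary_data`, the compose file can declare it as external so Compose reuses the existing volume instead of creating a project-prefixed one like `downloads_my_binary_data`. A sketch of just the volumes section:)

```yaml
volumes:
  my_binary_data:
    external: true   # reuse the pre-created volume; Compose will not create or name-prefix it
```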

#9

That’s very nice. Thanks a lot!