Utterly mystified about the Docker file system

I’m new to Docker and I’m following the basic instructions. I can get something running following the instructions step by step, but the instant I try to do something myself everything stops working. In particular I’m utterly mystified about the relationship between the file system in a container and the file system on my local disk.

I thought that the idea of Docker was that it should be isolated, but it quite clearly isn’t. A write to files inside a running container shows up as a write to files on my local disk. This means I can’t have multiple containers for the same image as they’ll all be fighting over access to my local disk.

I’m running a basic Rails app (following https://docs.docker.com/compose/rails/) with the web and db in separate images and separate containers. My docker-compose.yml is:

version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

When I run the app it works! Great, but updates to the log file in the docker_web_1 container show up as updates to my local log/development.log.

I’m guessing this is because the lines in the docker-compose.yml

  volumes:
     - .:/myapp

are sharing the directory in the container with my local directory. Why?

What I want is a set of containers, each with their own IP address and isolated file systems and databases, that can communicate with each other. What’s the best way to do this?

You probably found a tutorial that’s optimized for rapid development over, well, using Docker effectively.

Docker has a very good tutorial on building and running custom images. The important part of this is that you COPY your application into the image in a Dockerfile, and don’t then tell Docker to ignore the contents of the image in favor of what’s on your local disk (delete the two lines in the docker-compose.yml that you quoted).
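As a rough sketch of what that Dockerfile might look like (the Ruby version, the extra apt packages, and the /myapp path are assumptions based on the tutorial you linked, not a definitive recipe):

# Base image and system packages are assumptions; adjust to your app.
FROM ruby:2.7
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client

# Install gems first so this layer is cached when only app code changes.
WORKDIR /myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Bake the application code into the image; no host mount needed at runtime.
COPY . .

EXPOSE 3000
CMD ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"]

With the code in the image, each container you start from it gets its own copy of /myapp in its own writable layer.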

There are a few good uses of Docker volumes or of mounting host directories into containers. One is arranging persistent storage for a container (as you have for PostgreSQL): containers inevitably get deleted, so you need somewhere outside the container to store their actual data. A second is getting log files out (in your example you might mount something on /myapp/log). A third is pushing config files in. The code itself really should live in the image and not get mounted from the host. In all of these cases, if you have multiple copies of the container, it’s up to you to arrange non-overlapping storage.
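For example, a web service that keeps its code in the image but still gets its logs out might look roughly like this (the myrailsapp image name and the /myapp/log path are assumptions following the standard Rails layout):

web:
  image: myrailsapp
  volumes:
    # Only the log directory is shared with the host; the code stays in the image.
    - ./log:/myapp/log

If you ran two copies of this, you would point each copy’s ./log (and the database’s ./tmp/db) at a different host directory.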

A development sequence I’ve found works well is to:

1. install a working local development environment
2. write code
3. run rspec and similar tests
4. run manual tests
5. only then run docker build and build an image containing my code (sketched below)
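A minimal sketch of that last step (myrailsapp is just an assumed tag name):

# Build an image containing the current code, using the Dockerfile in this directory.
docker build -t myrailsapp .

# Start the stack; docker-compose uses whatever image: or build: the YAML specifies.
docker-compose up -d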

This is almost out-of-the-box functionality. Containers don’t really have their own IP addresses (they do, but using them directly is asking for trouble); you generally publish a service by telling Docker to relay a TCP port on the host to the container (the ports: ['3000:3000'] in the docker-compose.yml file).

In this example, if you built an image of your application and changed docker-compose.yml to have image: myrailsapp instead of build: . and to not have volumes: ['.:/myapp'], then you could copy it into a new empty directory, change the host port number of the web container (ports: ['3001:3000']), and launch a new, separate, self-contained copy of this stack. The database storage would ultimately live on the host under the new directory.
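For illustration, that second copy’s docker-compose.yml could look roughly like this (again, myrailsapp is an assumed image name):

version: '3'
services:
  db:
    image: postgres
    volumes:
      # Database files land under this new directory, separate from the first copy.
      - ./tmp/db:/var/lib/postgresql/data
  web:
    image: myrailsapp
    ports:
      # Different host port so both stacks can run at the same time.
      - "3001:3000"
    depends_on:
      - db

Because docker-compose gives each project directory its own default network, the web container in each copy only sees its own db, so the two stacks stay isolated from each other.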


(…and I’m only now clicking on the link and discovering it’s official Docker documentation. That’s a little disappointing: I really do feel like bind-mounting the actual application into a container should be an anti-pattern.)


Only just got back to this. Thanks for the detailed reply. I’ve got a better understanding now. Still learning about how volumes work though.