Docker Community Forums


Run, Run, Run, Run - looking for a better method

I am new to Docker and have inherited a Dockerfile that’s used to build our Django environment.

The Dockerfile is a series of (mostly) RUN commands:

Is there a better method of combining these?

Does every RUN command create a separate image?

RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -sf /bin/true /sbin/initctl
RUN apt-get update && apt-get install -qyy curl
RUN curl -sL | bash -
RUN apt-get update && apt-get install -qyy
RUN add-apt-repository ppa:fkrull/deadsnakes
RUN apt-get update && apt-get install -qyy
RUN wget --quiet -O - | sudo apt-key add -
RUN sudo apt-get update
RUN pip install supervisor-stdout
RUN service supervisor stop
RUN service nginx stop
RUN rm /etc/nginx/sites-enabled/default && rm /usr/share/nginx/html/*
RUN apt-get update && apt-get install -qyy libjpeg8
RUN apt-get update && apt-get install -qyy libjpeg-dev libpq-dev autoconf build-essential libffi-dev
RUN apt-get update && apt-get install -qyy postgresql-9.4
RUN sed -i 's/#listen_addresses/listen_addresses/g' /etc/postgresql/9.4/main/postgresql.conf
RUN service postgresql start;
RUN pip install virtualenv virtualenvwrapper
RUN virtualenv -p /usr/bin/python3.5 /venv
RUN . /venv/bin/activate; pip install --upgrade pip setuptools wheel
RUN . /venv/bin/activate; pip install pillow==2.9.0 pandas==0.17.1
RUN . /venv/bin/activate; pip install -r requirements.txt
RUN apt-get remove -qyy libjpeg-dev libpq-dev autoconf libffi-dev
RUN service postgresql start;
RUN apt-get remove -qyy postgresql-9.4 && apt-get autoremove -qyy
RUN . /venv/bin/activate; cd /app; python npm
RUN . /venv/bin/activate; cd /app; python compilescss
RUN . /venv/bin/activate; cd /app; python collectstatic --noinput
RUN apt-get remove -qyy nodejs npm && apt-get autoremove -qyy

Yes. You can see them all if you run docker images -a; "layer" is the more common term.

Sure: use a single RUN command that runs one long shell command joined by && (which stops at the first failure, unlike ;). A lot of my Dockerfiles look like

RUN dpkg-divert --local --rename --add /sbin/initctl \
 && ln -sf /bin/true /sbin/initctl \
 && ...

One thing you might try is creating the /etc/apt/sources.list file once with all of the repositories; then running apt-get update once only; then running apt-get install once only.
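As a rough sketch of that pattern, using package names copied from your Dockerfile (the deadsnakes source line here is an illustration of what such an entry looks like, not necessarily the exact line your base image needs):

```dockerfile
# Add all apt sources up front, then update and install in one layer.
RUN echo "deb http://ppa.launchpad.net/fkrull/deadsnakes/ubuntu trusty main" \
      > /etc/apt/sources.list.d/deadsnakes.list \
 && apt-get update \
 && apt-get install -qyy \
      curl \
      libjpeg8 libjpeg-dev libpq-dev \
      autoconf build-essential libffi-dev \
 && rm -rf /var/lib/apt/lists/*
```

Cleaning up /var/lib/apt/lists/* in the same RUN also keeps the apt cache out of the layer, which you can't do when update and install are separate layers.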

Another really important thing you should do is break this up into multiple containers. Glancing through your Dockerfile it looks like you’re trying to embed nginx, PostgreSQL, and your application all into a single image; those really should be three (and I’d recommend using the standard postgres:9.4 image over building your own).
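For example, a minimal docker-compose.yml along those lines might look like this (service names and port mappings here are placeholders, not taken from your setup):

```yaml
version: "2"
services:
  db:
    image: postgres:9.4      # standard image instead of building your own
  web:
    build: .                 # your Django app's own Dockerfile
    depends_on:
      - db
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - web
```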

To a first approximation, commands like service, initctl, and systemctl just don’t work in Docker, and you should figure out how to accomplish your goals without using them. (Two things combine here: every RUN line and every docker run command starts over with a clean filesystem and no processes running, and systemd tries to manage so much of the system that it can’t run inside Docker.)
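For instance, instead of RUN service nginx start, the usual Docker approach is to make the server the container’s foreground process (a sketch, assuming the stock nginx image/config):

```dockerfile
# Run nginx in the foreground as the container's main process,
# instead of starting it via "service"/systemd.
CMD ["nginx", "-g", "daemon off;"]
```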

Since the "activate" script sets some environment variables and that’s it, you could combine these as

RUN . /venv/bin/activate \
 && pip install --upgrade pip setuptools wheel \
 && pip install pillow==2.9.0 pandas==0.17.1 \
 && pip install -r requirements.txt

Or, you can directly run /venv/bin/pip without running the activate script, and the right thing will happen.
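That is, the three pip layers could be written without sourcing activate at all (same packages as in your file):

```dockerfile
RUN /venv/bin/pip install --upgrade pip setuptools wheel \
 && /venv/bin/pip install pillow==2.9.0 pandas==0.17.1 \
 && /venv/bin/pip install -r requirements.txt
```

The pip inside the virtualenv already points at the virtualenv’s Python, so installing through /venv/bin/pip lands packages in /venv just as it would with activate sourced.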

Or, since a Docker container is a lot like a virtual environment in that it’s an isolated filesystem space, in this context there’s no real harm (and it’s IMHO simpler) in just pip installing things globally into /usr/local.

Thank you for a good and thorough explanation. I’m implementing your recommendations.