Run docker app like an executable

Hi, I am trying to get a Flask app that runs on Docker to run like an executable on Marathon.
That is to say, I normally start my app with docker run, and I would like to use CMD or ENTRYPOINT instead.

docker run -p 8080:80 -d -t -i -v $(CURDIR)/app:/opt/app --name dd-api dharmicdata/python-deploy

I tried CMD and similar variations on ENTRYPOINT, but I keep getting a path error saying those locations do not exist:

CMD cd /opt/app/ && /opt/venv/bin/python 

I am quite new to devops, so I am hoping you guys can enlighten me. Any ideas what I might be doing wrong?

My image is as follows:

├── Makefile
├── app
│   ├──
│   ├──
│   ├── hroy.json
│   └── tests
│       └──
├── ops
│   ├── base
│   │   ├── Dockerfile
│   │   └── requirements.txt
│   ├── deploy
│   │   ├── Dockerfile
│   │   ├── nginx.conf
│   │   └── supervisord.conf
│   └── local
│       └── Dockerfile
└── scripts
    └── dd-api.conf


the Dockerfiles (base, local and deploy, pasted together below):

# ops/base/Dockerfile
FROM ubuntu:14.04

# keep upstart quiet
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -sf /bin/true /sbin/initctl

# no tty
ENV DEBIAN_FRONTEND noninteractive

# get up to date
RUN apt-get update --fix-missing

# global installs [applies to all envs!]
RUN apt-get install -y build-essential git
RUN apt-get install -y python python-dev python-setuptools
RUN apt-get install -y python-pip python-virtualenv
RUN apt-get install -y nginx supervisor

# stop supervisor service as we'll run it manually
RUN service supervisor stop

# build dependencies for postgres and image bindings
# RUN apt-get build-dep -y python-imaging python-psycopg2

# create a virtual environment and install all dependencies from PyPI
RUN virtualenv /opt/venv
ADD ./requirements.txt /opt/venv/requirements.txt
RUN /opt/venv/bin/pip install -r /opt/venv/requirements.txt

# expose port(s)

# ops/local/Dockerfile
FROM dharmicdata/python-base

# start supervisor to run our wsgi server
CMD cd /opt/app/ && /opt/venv/bin/python



# ops/deploy/Dockerfile
FROM dharmicdata/python-base

# Download latest data from s3
CMD cd /opt/scripts/ && /opt/venv/bin/python --table hroy --s3_bucket allermsql --output_dir /opt/app 

RUN pip install supervisor-stdout

# file management, everything after an ADD is uncached, so we do it as late as
# possible in the process.
ADD ./supervisord.conf /etc/supervisord.conf
ADD ./nginx.conf /etc/nginx/nginx.conf

# restart nginx to load the config
RUN service nginx stop

# start supervisor to run our wsgi server
CMD supervisord -c /etc/supervisord.conf -n


To diagnose the path problem, try launching your image with an interactive bash prompt, and then take a look around at what actually exists:

docker run -p 8080:80 -it -v $(CURDIR)/app:/opt/app --name dd-api dharmicdata/python-deploy bash
$ ls /opt/app
$ ls /opt/venv/bin/python

If one of those paths is missing, is your volume getting mounted properly? Is the host path you specified actually present on the machine running the Docker daemon, and does it actually contain the code?
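One way to check the mount (assuming the container name dd-api from your docker run command) is to inspect the container's mount metadata and compare it with the host side:

```shell
# Show what Docker actually mounted into the container.
# (dd-api is the container name used in the docker run command above.)
docker inspect --format '{{json .Mounts}}' dd-api

# Compare against the host side: does the source directory exist
# on the machine where the Docker daemon runs, and is it non-empty?
ls -la "$(pwd)/app"
```

If the Mounts output is empty, or the Source path does not exist on the daemon's host, the volume never made it into the container.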

Additionally, I noticed that your Dockerfile for deploy has two CMD instructions. The CMD instruction only sets metadata in the image for what command should be run at startup, and an image can only carry one. Including two CMD instructions causes the first one to be completely ignored, since only the last one takes effect.
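A minimal sketch of that behaviour (hypothetical image, not your actual files):

```dockerfile
FROM ubuntu:14.04

# This CMD is silently discarded...
CMD echo "first"

# ...because only the last CMD in the Dockerfile is recorded in the
# image metadata and run at container startup.
CMD echo "second"
```

Building this and running the resulting container prints only second. If you need both steps, chain them in a single CMD, or move one of them into a RUN instruction or an entrypoint script.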



Thanks for getting back to me so quickly. I realise I have a lot to learn about devops in general, and about Docker, Marathon and Mesos specifically.

I got some help from one of the solution architects at our bare-metal provider (who are epic, incidentally; can I name them?). I was doing quite a few things wrong.
Just to clarify: although I was able to run the app on Docker fine, when I tried the same image on Marathon I was missing a few things in my understanding of how Marathon app JSONs work, how the image gets to the other nodes, and what the app requires at runtime. Here is how he solved it:

So take a look at what I did:

  1. Added a Docker registry to the list of running containers on the first host, to act as your own "Dockerhub", so that the other hosts can pull the image from there. You need this because docker build only produces the image on the machine where the build is executed; it does not know about the other hosts in the Mesos cluster.
  2. Configured the app JSON to use that image and to point at the proper /opt/app and /opt/scripts directories required by the default entrypoint ("the CMD line").
  3. Modified the Docker image so that it only has a single CMD entry (there can be only one, like in the Highlander :slight_smile: ). By the way: the CMD line or the ENTRYPOINT line in the Dockerfile gets executed at "runtime", after the Docker image has been instantiated. The cmd line in the Marathon app JSON overrides it, and the args line in Marathon appends arguments to it.
CMD cd /opt/scripts/ && /opt/venv/bin/python --table hroy --s3_bucket allermsql --output_dir /opt/app && supervisord -c /etc/supervisord.conf -n
  4. Modified nginx and supervisord to listen on instead of
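For anyone following along, a sketch of what such a Marathon app definition might look like. All names, paths and the registry address here are hypothetical; adapt them to your own setup:

```json
{
  "id": "dd-api",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.local:5000/dharmicdata/python-deploy",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    },
    "volumes": [
      { "containerPath": "/opt/app", "hostPath": "/data/dd-api/app", "mode": "RW" }
    ]
  },
  "instances": 1,
  "cpus": 0.5,
  "mem": 256
}
```

Adding a top-level "cmd" field here would override the image's CMD, while "args" would append arguments to the entrypoint, matching the override/append behaviour described in point 3.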

So some further reading points for me are: the Docker registry, Marathon JSON files, Marathon on Mesos, Dockerfiles and Docker image building in general (specifically how to use cmd and args), and routing between all of the above.

So the way I understand it: firstly, Marathon only wants the Docker image; the way the app runs should be set in the Marathon JSON using cmd and args. Any files the dockerised app requires should reside in an accessible location and be mounted using the volume parameter.
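The registry step above can be sketched as follows (the host names and the registry:2 image tag are illustrative; registry:2 is the official Docker registry image):

```shell
# Run a private registry on the first host.
docker run -d -p 5000:5000 --name registry registry:2

# Tag the locally built image so it points at that registry,
# then push it so the other Mesos agents can pull it.
docker tag dharmicdata/python-deploy localhost:5000/dharmicdata/python-deploy
docker push localhost:5000/dharmicdata/python-deploy

# On the other hosts (and in the Marathon app JSON), reference the
# image via the registry's address instead of the local image name.
docker pull <first-host>:5000/dharmicdata/python-deploy
```

Without this, an image built on one host is simply invisible to the rest of the cluster, which is why Marathon could not start the app on other nodes.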

Very glad to have this up and running now :smile: