What are some best practices for managing Docker containers and deployment?

My concern is how to move from local development to production.

== Overview ==

I am currently using the meanjs/mean image for my container and linking it with mongo.

I generated the MEAN stack boilerplate and ran npm install on my local machine.

I run it like so: docker run -it --link my_mongo:db_1 -p 3000:3000 -v $PWD:/opt/mean.js meanjs/mean npm start

This works great locally.
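For reference, here is roughly what that same setup looks like as a docker-compose.yml. The db_1 service name just mirrors my link alias; the mongo image and the rest of the layout are my own guesses at an equivalent config.

    # Rough docker-compose.yml equivalent of the docker run command above
    version: "2"
    services:
      web:
        image: meanjs/mean
        command: npm start
        ports:
          - "3000:3000"
        volumes:
          - .:/opt/mean.js
        depends_on:
          - db_1
      db_1:
        image: mongo

With a version 2 compose file both services land on the same default network, so the app can reach mongo at the db_1 hostname without any links.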

== Issue ==

What is the best way to get the containerized project onto a server? I can think of a few approaches, but which is best, and what are the pros/cons of each?

  • Use git to version the source code and all the Docker files. Pull from GitHub/Bitbucket onto the server, then use docker-compose to build and run everything for you.

  • Build an entirely new image containing the source files and push it to hub.docker.com; pull that image onto the server (a rough Dockerfile sketch for this option is below the list).

  • In the Dockerfile, have it pull the most recent source for the app from GitHub/Bitbucket.
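To make the second option concrete, here is a rough sketch of the kind of Dockerfile I mean; the node base image, the /opt/mean.js path, and the exposed port are assumptions on my part, not anything official from MEAN.js:

    # Sketch of a Dockerfile that bakes the app source into the image
    FROM node:6

    WORKDIR /opt/mean.js

    # Install dependencies first so this layer is cached across source-only changes
    COPY package.json ./
    RUN npm install

    # Copy the rest of the application source into the image
    COPY . .

    EXPOSE 3000
    CMD ["npm", "start"]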

For starters, use a user-defined docker network rather than --link, which is a legacy feature.
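Roughly like this; the network and container names are just placeholders:

    # Create a user-defined bridge network and put both containers on it.
    # Containers on the same network can reach each other by name, so --link is unnecessary.
    docker network create mean_net

    docker run -d --name db_1 --net mean_net mongo

    docker run -it --net mean_net -p 3000:3000 \
        -v $PWD:/opt/mean.js meanjs/mean npm start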

I’m not sure what your question is – usually folks will bake an image intended for deployment locally or in CI, optionally tagged with something like the git SHA (e.g. nathanleclaire/app:aef234asdf), docker push it to Docker Hub, and then docker run that image on their server / higher environment. Docker Compose is a good tool, and you might want to take a look at the new docker service changes in 1.12 too; they’re meant to help with orchestration.
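Concretely, that workflow is something like the following (the repository name is just the example from above; substitute your own):

    # Build locally or in CI, tagging the image with the current git commit SHA
    GIT_SHA=$(git rev-parse --short HEAD)
    docker build -t nathanleclaire/app:$GIT_SHA .
    docker push nathanleclaire/app:$GIT_SHA

    # On the server / higher environment, pull and run that exact tag
    docker pull nathanleclaire/app:$GIT_SHA
    docker run -d -p 3000:3000 nathanleclaire/app:$GIT_SHA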

Oh, so would you normally create a new image per update? For example, if you were adding new features or fixing bugs, would you generate a new image and then just push it to the server?

Is it the norm for Docker folks to have an image per iteration of their app? I was under the impression you'd have one image per app and then have Docker somehow update the app with a git pull.

Absolutely. Our continuous integration system builds a new suite of Docker images for our combined application on every git commit.
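As a rough sketch of what that looks like (the service names, build paths, and the COMMIT_SHA variable are placeholders for whatever your project and CI provide):

    # Hypothetical docker-compose.yml the CI job builds from on every commit
    version: "2"
    services:
      web:
        build: ./web
        image: myorg/web:${COMMIT_SHA}
      worker:
        build: ./worker
        image: myorg/worker:${COMMIT_SHA}

CI then just runs docker-compose build followed by docker-compose push, so every commit produces a complete, consistently tagged set of images you can deploy together.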