I would like to know how you get data (primarily source code) into your containers. From what I've seen in different images, there seem to be three main approaches:
Using RUN git clone ... in a Dockerfile and building the image each time the source code changes.
Getting the source code onto the host and using COPY . /whatever in the Dockerfile.
Getting the source code onto the host and using docker run -v $(pwd):/whatever/ (rough sketches of all three below).
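For concreteness, here is roughly what each option looks like (the repo URL, image and path names are just placeholders):
# Option 1: a Dockerfile that clones at build time (rebuild on every change)
FROM debian
RUN apt-get update && apt-get install -y git
RUN git clone https://example.com/me/myapp.git /whatever
# Option 2: a Dockerfile that copies sources from the host build context
FROM debian
COPY . /whatever
# Option 3: no sources baked into the image; bind-mount the checkout at run time
docker run -v $(pwd):/whatever/ myimage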
Do any of these alternatives have severe drawbacks or problems? Some authors suggest that git clone in a Dockerfile does not always behave as expected (the build cache can keep serving an old checkout instead of the latest version of the source code).
I would like to use the method for my development and build containers, so I need a solution that works on my local machine but also with continuous integration platforms like Travis and others.
I haven't been using Docker for long and cannot really give credible advice, but I do believe there isn't one size that fits all. It really depends on the use case.
In the only image I have published so far, only the third option was feasible, because the user is expected to make (uncommitted) changes to the Git repository before she starts the Docker container that uses the mounted repository.
If you don't need to modify the code in the repository, I'd go with option 1 because it's more convenient than option 2. I don't really see a use case for option 2.
In our case we use the COPY method for production images, because our goal is to deliver a working application without extra tooling like Git. I want to run docker-compose and have everything there: my application container with all built assets and vendors, my server container, and my database container.
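Roughly, such a production Dockerfile could look like this (we are a Ruby shop, so this assumes a Rails-style app; the image tag, file names and commands are illustrative):
FROM ruby:2.3
WORKDIR /app
# Install vendors first so this layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install
# Then copy the application itself, built assets included
COPY . .
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]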
On the other hand, we mount our project directories during development, because it's easier and much faster for us to work with the code that way.
We never use git clone, because with it there is no easy way to pull a specific commit in CI, and you are not interested in the latest commit anyway: a new commit can appear while the CI build is running.
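In other words, on CI the runner checks out the exact commit under test on the host, and the Dockerfile just COPYs it in. Roughly (the COMMIT_SHA variable stands for whatever your CI platform provides):
git checkout "$COMMIT_SHA"              # the exact commit being tested, not "latest"
docker build -t myapp:"$COMMIT_SHA" .   # the Dockerfile uses COPY, so this commit is what gets built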
Hi @zmijewski
I'm trying to start using Docker in my development process, all the way up to production.
Could you please elaborate on your development process? Mounting the project directory seems like a great option, but I'm not sure what to do exactly. I want to have my dev environment (for example, a Tomcat server inside a container) with:
To use Docker in a development environment we create Dockerfiles, usually based on Debian or Alpine distributions because they are the most lightweight. We install the required packages; in your case that would probably be Tomcat and Java, and so on. For more info check http://docs.docker.com/engine/articles/dockerfile_best-practices
Then we use docker-compose to run containers with volumes pointing at our project directory. Read more here: https://docs.docker.com/compose/
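For your Tomcat case, a minimal docker-compose.yml could look something like this (the war name and your build output path are my assumptions; /usr/local/tomcat/webapps is where the official tomcat image looks for apps):
version: "2"
services:
  web:
    image: tomcat:8
    ports:
      - "8080:8080"
    volumes:
      # mount your build output into Tomcat's webapps directory
      - ./target/myapp.war:/usr/local/tomcat/webapps/myapp.war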
I do not work with Java, so I have no clue how to debug Java applications. You are probably doing it through some kind of IDE like NetBeans, Eclipse or IntelliJ. Doing that with Docker is impossible unless you install Java on your host. In Ruby we debug with byebug, and we do it in the container without any effort at all.
As far as CI is concerned, there are a few approaches to running tests in that environment. You can use Jenkins to build a semi-production image with a Dockerfile, then run the tests, and finally make a fully production image with downloaded vendors and your whole application.
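As a sketch, the Jenkins job boils down to something like this (the tag names, Dockerfile.test and the rspec command are just examples from our Ruby setup):
docker build -t myapp:test -f Dockerfile.test .   # semi-production image with test dependencies
docker run --rm myapp:test bundle exec rspec      # run the test suite inside it
docker build -t myapp:production .                # fully production image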
In a perfect world you should run one process per container to ease scaling. So for one simple web application you probably want one container with nginx, one with the web application and all vendors, one with the database, and probably one with persistent data storage for the database.
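Expressed as a docker-compose.yml, that separation could look roughly like this (service and volume names are invented, and I'm using a named volume for the persistent storage):
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: .   # the web application with all vendors
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data   # persistent data storage
volumes:
  dbdata: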
If you're currently using Docker Hub then yes, that is the case. Do remember that Docker Hub offers only one private repository for free; if you have multiple private projects, you'll have to pay. There are also alternatives to Docker Hub, none of which I've tried, though.
Containers are ephemeral, meaning that the lifetime of a container is short. You should be able to just destroy a container and run a new one that behaves exactly the same, with minimal configuration.
I think you should read up on how Docker uses "copy on write" to save space. In a nutshell it means that if you run multiple containers from the same image, they all share the same files. A file is only copied when a change is made to it. So running 100 containers instead of 1 barely increases the space used on your file system. It's a really interesting concept.
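You can watch copy-on-write happen with docker diff (the image name is a placeholder; assume it runs a long-lived process):
docker run -d --name one myimage
docker run -d --name two myimage               # both share the same read-only image layers
docker exec one sh -c 'echo hi >> /etc/motd'   # write inside one container
docker diff one                                # lists only the file that was copied up and changed
docker diff two                                # empty: nothing was duplicated for this container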
I also use Docker containers as Jenkins slaves for building and testing code.
I'm curious, though, about what you mean when you say that for a web application you have separate nginx and web application containers. Do you mean that you use NGINX as a reverse proxy to your web application container? Because both would run a web server, making the NGINX container optional depending on requirements.
What if we want to pull code from a private repo and build that code (go build) at docker build time?
To achieve this, I have tried a few things: 1. ADD /root/.ssh/id_rsa /root/.ssh/id_rsa in the Dockerfile.
2. Use a shell script in the Dockerfile; the script do-ssh.sh is:
I have already read this issue: https://github.com/docker/docker/issues/6396
It seems that there isn't any way to pull from a private repo successfully during docker build.
Must I compile my code at docker run time?
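For what it's worth, one workaround along the lines you already tried is baking a throwaway deploy key into the build, with the big caveat that the key remains in an image layer even if you delete it later. A sketch (the base image, repo URL and output path are placeholders):
FROM golang:1.6
# WARNING: the key is stored in this layer even if deleted in a later step
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts && \
    git clone git@github.com:me/private-repo.git /go/src/private-repo && \
    cd /go/src/private-repo && go build -o /usr/local/bin/app . && \
    rm /root/.ssh/id_rsa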
I'm more than confused by what I'm looking at. I don't recognize any of that code. What language is that? It doesn't appear to be Linux shell / bash commands… Is that a docker-compose file? What type of file is that, and how do you run it? Is this Java? I'm way confused…
The system talks about "Part two". I'm sort of old-fashioned: when I see part two, I immediately think there must be a part one somewhere. What is part one, where is it, and why wasn't it included?
No offense intended, but I’m totally lost. This may be an awesome technique, but without appropriate context the example is more than confusing.
Many thanks for any clarification you can offer,
Zip.
Hi, thanks for sharing your thoughts.
In my case, I have a container running my application. The application consumes a configuration file. The configuration file must be generic and may be updated based on requirements. So we keep the configuration in a Git repository, and any modifications to it are pushed there. My goal is to make the container consume whatever the latest configuration available in Git is (maybe after a restart of the container). I'm thinking of creating a Dockerfile with the following contents:
FROM ubuntu
# Git is not in the base ubuntu image, so install it first
RUN apt-get update && apt-get install -y git
# For the private and public key
ADD .ssh /root/.ssh
# For copying the known_hosts file
RUN mkdir /usr/…
# Copy the known_hosts file
ADD known_hosts /usr/…
RUN mkdir /opt/myservice/code_to_be_synced
# Note: RUN cd does not persist across instructions; WORKDIR does
WORKDIR /opt/myservice/code_to_be_synced
# This is to clone the code base
RUN git clone username@gitrepo
ADD entry.sh /etc/
RUN chmod +x /etc/entry.sh
ENTRYPOINT ["/etc/entry.sh"]
entry.sh will contain the code to pull the latest changes from the (already cloned) Git repository and replace the existing configuration file with the latest version available in the repository.
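A minimal entry.sh along those lines might look like this (the repo directory, config paths and the service command are placeholders):
#!/bin/sh
# Pull the latest configuration from the already-cloned repository
cd /opt/myservice/code_to_be_synced/myrepo
git pull origin master
# Replace the existing configuration with the freshly pulled one
cp config/app.conf /etc/myservice/app.conf
# Then hand over to the actual application
exec /usr/local/bin/myservice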