Docker Community Forums

Share and learn in the Docker community.

Using debian image from one machine on another

(Larrymartell59) #1

I have one machine that has a Debian image and I want to use it on another machine. I copied it over, but when I build my app’s image on the target machine it did not use the existing image and instead rebuilt it. Is there a way I can force it to use my Debian image instead of building a new one?

The reason I need to do this is that the image I have contains older libs, and when I build a new one there are incompatibilities that cause a few things not to work. I tried installing the older libs directly when building my image, but they no longer seem to be available.

(Metin Y.) #2

Ever thought about operating your own private registry?

(Larrymartell59) #3

That is exactly what I did to transfer the image from the first machine to the second. I was not asking how to transfer the image; I was asking how to get docker build to use it. In my Dockerfile I have FROM debian:buster, and I have an image called debian:buster, but the build doesn’t use it as is - it loads new packages and libs. If I removed my apt-get commands from the Dockerfile, would that cause it to use the existing image as is?

(Metin Y.) #4

Apologies! Because of the lack of Docker terminology, specific Docker commands, or a copy of your Dockerfile in your original post (and also your 2nd), that was the best guess I could make.

(Larrymartell59) #5

Sorry for not being clear. Let’s look at a specific example. If I remove the debian image and build my container I see messages like this:

Sending build context to Docker daemon 1.042GB
Step 1/28 : FROM debian:buster
buster: Pulling from library/debian
53d9d89325e4: Pull complete
Digest: sha256:9646b0ee6d68448e09cdee7ac8deb336e519113e5717ec0856d38ca813912930
Status: Downloaded newer image for debian:buster
Step 5/28 : RUN (apt-get update -y && DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes build-essential git python python-dev python-setuptools nginx sqlite3 supervisor default-mysql-server default-libmysqlclient-dev vim cron unzip software-properties-common python2.7 openjdk-8-jre-headless ca-certificates-java openjdk-8-jre xvfb wkhtmltopdf sendmail-bin sendmail r-cran-ggplot2 r-cran-caret net-tools traceroute nmap tcpdump)
—> Running in 3f291089199d
Get:1 buster InRelease [159 kB]
Get:3 buster InRelease [159 kB]
Get:4 buster/updates InRelease [38.3 kB]
Get:2 buster-updates InRelease [47.6 kB]
Get:5 buster/main amd64 Packages [7978 kB]

Hundreds and hundreds of lines of Get, Unpacking, Setting up, adding, collecting, etc. follow, and the build takes a long time.

If I run it a second time I get none of that - I get a lot of “Using cache” messages, nothing is downloaded, and the build is super fast.

Now, if I copy the debian image from another machine and run the build, it does not pull from library/debian - but that is the only step that differs. It does not use the cache: there are still hundreds and hundreds of lines of Get, Unpacking, Setting up, adding, collecting, etc., and the build takes a long time.

What I want is this: if I copy the debian image from another machine and build the container, the copied image should be used as is, with no updates - the same thing that happens when I build and then build again.

I hope that is clearer.

(Metin Y.) #6

Your objective is not just to copy the resulting image from machine 1 to machine 2 - you want to copy the layers of the build cache itself to the 2nd machine.

Honestly, that’s a tough one!

I don’t even know in which subfolder of the Docker data-root (see docker info) the build cache is located. Let’s assume for a moment that it makes more sense to copy the build cache instead of the resulting image (which could be reused as a base image in a different Dockerfile): you would need to identify and copy the originally used base image by its checksum, copy the build cache itself, and have an identical graphdriver on both machines. Even if this approach succeeds, there is no guarantee that the Docker engine on the 2nd host will pick up the pieces and put them together properly. I am afraid this approach is highly likely to mess up the metadata of the 2nd instance instead. If you can afford to wipe your Docker data-root whenever an attempt fails, you could try until you succeed… but if you don’t have the luxury of starting from a scratch system, I would definitely not try it even once.

If I were in your shoes, I would create my own tag of the target image on machine 1, then actually save it and load it on the 2nd instance. Since you already have an image with the final steps included, ready to be used as a base image, you could easily create another Dockerfile that uses this particular image to do whatever you feel is missing.
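For reference, a minimal sketch of that tag/save/load workflow (the image names myapp and myapp-base are placeholders, not names from this thread):

```shell
# On machine 1: give the already-built application image its own tag
# and export it to a tarball
docker tag myapp:latest myapp-base:frozen
docker save -o myapp-base.tar myapp-base:frozen

# Transfer myapp-base.tar to machine 2 (scp, USB, ...), then load it:
docker load -i myapp-base.tar

# A new Dockerfile on machine 2 can then start with
#   FROM myapp-base:frozen
# and add only the steps that are still missing.
```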

Offtopic update: also be careful with Java 8 in the container, as its default behavior is to read CPU and RAM information directly from the kernel and not from the container’s cgroup settings. If you start restricting CPU/RAM for the container, you might encounter surprises like degraded performance or OOM kills. Java 9 supports cgroup limits, and that support was backported to Java 8 update 131 - though it needs to be enabled explicitly via JAVA_OPTS!
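For what it’s worth, on Java 8u131+ the backported cgroup awareness is behind experimental JVM flags, roughly like this (verify the flags against your exact JVM version; this is a config sketch, not from the thread):

```shell
# Enable cgroup-aware memory limits on Java 8u131+ (experimental flags)
export JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
```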

(Larrymartell59) #7

Can you tell me (or point me at the docs) how to create the tag and then load from that tag?

I know how to tag an image on docker build but in this case I want to tag an existing image. And will that include the contents of the cache?

(Metin Y.) #8

Re-tagging an image won’t magically include the build cache in the image, nor will the saved image include the build cache, nor is there any built-in Docker command that serves that purpose. My suggestion is to stop requiring the build cache!

“Load from a tag” is not a thing. You can load a previously saved image, which includes the tag information. This is what you already did.

(Larrymartell59) #9

Guess I am missing something. How do I stop using the build cache? How do I get a complete and usable Debian image from one machine and use it on another? That is my goal, and I appreciate all your help, but I still don’t see how to do that.

(Metin Y.) #10

You keep repeating it, but “How do I get a complete and usable Debian image from one machine and use it on another?” is not your objective.

I hope someone else will pitch in, find out what your primary objective is, and work out why you think this unusual approach of yours would be the right one. Good luck!

(Larrymartell59) #11

Admittedly it’s a kludge, but I am stuck trying to solve an issue. The underlying issue is not really docker related so I wanted to keep my posts here on topic, but since you asked:

I have a docker container on machine A that I run from an image I built a long time ago. It works fine. Then the other day I wanted to spin up the same container on machine B. I took the Dockerfile to machine B and tried to build the image there.

When it got to MySQL-python it failed with the error I found detailed here: The bug is in mariadb 10.3. From reading that, it appears to be fixed, but in the version of mariadb I get, it’s not. The old container image has mariadb 10.1, which does not have that bug; the version of mariadb I get in the new container is 10.3. What I really need is either 10.1 or the fixed 10.3. I have tried many, many things and asked in many places, with no joy. So my next thought was: the Debian image on machine A has what I need, so why can’t I copy that to machine B and build my app’s image using it? Hence my question here. If someone has an answer for my underlying issue, that would be even better.
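One possible way around no-longer-available package versions (not mentioned in this thread, and untested here) is Debian’s snapshot archive, which serves the package repository as it existed on a given date. Whether the exact mariadb build you need is reachable that way is something you would have to check; the snapshot date below is a placeholder, and the package names are the ones from the Dockerfile’s apt-get line:

```shell
# Inside the Dockerfile's RUN step: pin apt to a historical snapshot
echo 'deb http://snapshot.debian.org/archive/debian/20190801T000000Z buster main' \
    > /etc/apt/sources.list
# Snapshot metadata is old, so tell apt not to reject expired Release files
apt-get -o Acquire::Check-Valid-Until=false update
apt-get install -y default-mysql-server default-libmysqlclient-dev
```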

(Metin Y.) #12

By definition, a rebuild of a Dockerfile does NOT result in the same image. A single updated OS package will, at best, lead to one or more updated dependencies being installed in the rebuilt image.

Please provide the name of the image you use for docker create or docker run to create a container from that image. This is the image:tag you want to save and import on the other host.

(Larrymartell59) #13

Right, that is exactly my problem. So I want to build my app’s image on machine B from the existing Debian image I have on machine A that has the working packages. I cannot use the app’s image from machine A on machine B, as it has some files and settings specific to machine A.

(Metin Y.) #14

Are you sure you have hardcoded settings from machine A in your IMAGE and not just in the container you created from it?

(Larrymartell59) #15

Yes, I am sure. Unfortunately the Dockerfile copies some site-specific files into the image.

(Metin Y.) #16

Now we know:

  • you built an image containing machine-specific settings a long time ago; whenever you rebuild it there, it succeeds, because the build cache makes the build skip steps that would break your containerized application.

  • why rebuilding it on a system without the build cache fails.

  • that your image is tainted with machine-specific settings and MUST be rebuilt in order to be usable on a different system.

  • why it does not make sense to simply save/load the tainted image.

  • why you depend on the build cache for this particular image build: OS packages have moved along and, as a result, break your containerized application.

Honestly, the situation is messy!

There is no clean solution for your problem, as the image itself does not seem to follow the concept of separation of concerns (a database, several programming languages, and a web service in a single image don’t help to make an image maintainable).

Either find a better image for your application on GitHub or start decomposing the Dockerfile. Try to reuse as many official images as possible for the different tasks.

(Larrymartell59) #17

Even if I decomposed the Dockerfile, it would not help. The problem of not being able to build a Debian buster image with mariadb and MySQL-python would still exist.

(Metin Y.) #18

Well, what to respond to that?

So you are basically saying: decomposition doesn’t make sense because not a single Docker image on Docker Hub exists in the exact version that you need.

(Larrymartell59) #19

I may have implied that, but it was not what I meant. This client does not have the time or money to spend decomposing the Dockerfile. Anyway, what I ended up doing was removing the hardcoded site-specific things from the Dockerfile and adding them to the run script. I rebuilt the image on the working machine, copied it to the second, and it’s working there. Thanks much for all your replies.
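In case it helps others, a minimal sketch of that pattern (paths and names invented for illustration): the image stays generic, and the run/entrypoint script copies the site-specific files into place when the container starts.

```shell
#!/bin/sh
# Hypothetical entrypoint logic: copy machine-specific config from a
# bind-mounted directory into the app's config dir at container start.
set -e

inject_config() {
    src="$1"
    dst="$2"
    # Only copy if the site-config mount is actually present;
    # a missing mount is a no-op, not an error.
    if [ -d "$src" ]; then
        cp "$src"/* "$dst"/
    fi
}

# In the real entrypoint this would be followed by something like:
#   inject_config /site-config /app/config
#   exec /app/start-service
# with the container started e.g. as:
#   docker run -v /etc/myapp:/site-config:ro myapp
```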

(Metin Y.) #20

I am glad that you found a solution :slight_smile: