How to save image generated by docker compose

I have a docker-compose.yml file that builds new images per its instructions, and it is working fine.
Now I want to move only these images to another machine.
How can I do so?
I am able to save a single image using the docker save command, but how can I save the multiple images generated by docker-compose, and use them on another machine?

I would suggest setting up a private image registry (GitLab, Nexus3, Artifactory Container Registry, Harbor) and pushing the images to it. Other hosts can then pull the images from there.

If this is not applicable for you, you can pass a list of image tags to docker save:

docker save -o ubuntu.tar ubuntu:lucid ubuntu:saucy

In this case, ubuntu.tar is the target file holding the saved images ubuntu:lucid and ubuntu:saucy. You can add as many tags to the list as you want.

Note, though, that when you import them you cannot cherry-pick which ones to load; docker load always imports all images in the archive.
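For example, assuming a Compose project named myapp with services web and db (hypothetical names; check docker images for the ones on your machine), saving and later loading both images could look like this:

```shell
# Save both Compose-built images into a single archive
docker save -o myapp-images.tar myapp_web:latest myapp_db:latest

# On the other machine, load everything from the archive
docker load -i myapp-images.tar
```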

Loading this file will create two individual images.
I want to build only a single image from my docker-compose file
so that I can run it on another machine as a single container.
Is there any way to achieve this?

Sure, you build only once, which is conveniently a best practice in Continuous Delivery.

Services are built once and then tagged, by default as project_service, for example composetest_db. If the Compose file specifies an image name, the image is tagged with that name.
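A minimal sketch of that second case, with a hypothetical image name supplied under the image: key so the build produces a predictable tag:

```yaml
# docker-compose.yml — service and image names here are illustrative
services:
  web:
    build: .
    image: registry.example.com/myteam/web:1.0  # the built image gets this tag
```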

If you ran docker-compose up on your machine, it did the build.
Check the images with the docker images command.
Then you can use docker tag ... to tag your image with the name expected by your external registry, and docker push ... it there.
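A rough sketch of that flow, assuming the Compose-built image is named composetest_db and the registry host is registry.example.com (both placeholders):

```shell
# Find the image Compose built
docker images

# Re-tag it with the registry's naming scheme (placeholder names)
docker tag composetest_db:latest registry.example.com/myteam/db:1.0

# Push it to the private registry
docker push registry.example.com/myteam/db:1.0
```

Other hosts can then pull registry.example.com/myteam/db:1.0 directly.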

Do not use docker save for what you want to achieve.


It is some confidential code, so I can't push it to Docker Hub.
With docker save I would need to save both images and run both of them on the other computer.
Is there any other way to combine these two images into one and run only one image on my server?

That’s what I answered above.
The image is created by docker-compose build, or behind the scenes by docker-compose up.
You just push it to your own Docker registry (not Docker Hub).
It’s a single image; there is no need for a second one.

What if I have to upload the image to an offline HPC system that has no internet connection? Should I run docker-compose up on my PC and save the built image?

You don’t have to run the containers just to build the images, but yes, you need to build the images on a machine with internet access (or download the base images used in your Dockerfiles and build your images on the HPC system, if you prefer). Then save the images to a tar file, copy that to the offline system, and load the images there.
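The transfer itself could look roughly like this, with placeholder image and host names:

```shell
# On the machine with internet access: build and save
docker-compose build
docker save -o app-images.tar myapp_web:latest myapp_db:latest

# Copy the archive to the offline HPC system (scp is one option)
scp app-images.tar user@hpc-login:/scratch/

# On the offline system: load the images
docker load -i /scratch/app-images.tar
```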

Note that on HPC systems it is likely that you won’t have Docker, but a Docker-compatible alternative like Singularity. In that case you need to convert the images, so loading them would not be done with docker load, but with something like what you can read in this guide from NASA:

https://www.nas.nasa.gov/hecc/support/kb/converting-docker-images-to-singularity-for-use-on-pleiades_643.html

Here is the relevant documentation

Build a Container — SingularityCE User Guide 3.11 documentation

If you need to design an entire process for an HPC system with Singularity on it, you might want to run Singularity locally and build images directly with it. You can install it on Linux, Windows, and Mac too:

Installing SingularityCE — SingularityCE Admin Guide 3.11 documentation