Expected behavior
My local machine:
docker save -o myimg.tar img:tag
On AWS EC2 instance running Ubuntu 18.04:
docker load -i myimg.tar
Loaded image: img:tag
Actual behavior
My local machine:
docker save -o myimg.tar img:tag
On AWS EC2 instance running Ubuntu 18.04:
docker load -i myimg.tar
19eb0038b177: Loading layer [========================>] 199.6MB/199.6MB
9ccc7116172f: Loading layer [========================>] 207.6MB/207.6MB
Error processing tar file(exit status 1): not a directory
Additional Information
I’m trying to copy over an existing Docker image running on my local machine to an AWS Free-Tier EC2 instance running Ubuntu 18.04. I’m using “docker save” to save the image as a TAR file and scp-ing it over to the EC2 instance.
“docker load” fails consistently for every image I try to copy over this way. It is not specific to one particular image or container: it happens with every image built for the microservices running within my application. It does not happen with standard images I pulled from public registries, such as hello-world, Nginx, Elasticsearch, etc.
I can “docker load” the image from the generated TAR on my local machine itself without any issues. That got me wondering whether Docker is hitting some limit on my AWS Free-Tier instance, which has 1 GiB of memory and 26 GiB of disk space (disk space is not the issue).
Is there a way to debug what happens internally when I run “docker load”? “Error processing tar file(exit status 1): not a directory” doesn’t seem specific enough.
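One way to get more detail (a sketch, assuming Docker runs under systemd on Ubuntu 18.04, so its logs go to journald) is to follow the daemon logs while re-running the load, and to list the archive itself; the daemon log usually names the layer or path that triggered the error:

```shell
# In a second terminal, follow the daemon logs while re-running the load:
#   sudo journalctl -u docker.service -f
# Then inspect the archive's structure directly -- each layer should
# appear as a directory entry (ending in "/") with its own layer.tar:
tar -tvf myimg.tar | head -n 20
```

A "not a directory" error during extraction generally means some entry's parent path exists as a regular file rather than a directory, so scanning the listing for oddly shaped paths can narrow it down.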
file -b myimg.tar
POSIX tar archive
ls -l myimg.tar
-rwxr-xr-x 1 ubuntu ubuntu 791446016 Nov 19 22:11 myimg.tar
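Since the load succeeds locally from the same TAR, it's also worth ruling out corruption in transit. A minimal check is to compare digests on both machines:

```shell
# Run on the local machine and on the EC2 instance; the digests must
# match. A mismatch points at the scp transfer, not "docker load".
sha256sum myimg.tar
```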
Steps to reproduce the behavior
I don’t think the behavior can be reproduced on any machine other than mine. The sequence is pretty straightforward though:
- On local machine, “docker save -o myimg.tar img:tag”
- On AWS EC2 instance, “docker load -i myimg.tar”
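As a workaround, the save/scp/load sequence above can also be done as a single stream, which takes the intermediate file and the transfer step out of the equation (the key path and hostname below are placeholders):

```shell
# Pipe the image archive straight into "docker load" on the instance.
docker save img:tag | ssh -i mykey.pem ubuntu@ec2-host 'docker load'
```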
I’d appreciate it if anybody can help!