Pulling Docker images: i/o timeout

We are trying to pull Docker images from Docker Hub. This worked fine for quite a while, but now we always get an error, for example when pulling the “busybox” image:

Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout

Can anyone tell us what the issue might be and what we can do to fix the problem?

Thanks in advance
Jens

Same issue here for the last 3 days :frowning:

We had this today with a fresh CoreOS 522.5.0 cluster (Docker 1.3.3 build 54d900a) on Amazon’s EC2. I am not sure where the actual problem was, but it went away after reinstalling the cluster using the latest 522.6.0 AMIs.

As a side note, manually upgrading the “broken” instances from 522.5.0 to 522.6.0 did not fix the issue.

It works now. However, we did not change anything on our side.

Has anyone figured out what is causing this?

I’ve seen several posts on the Google group from people claiming that rebooting their routers solved the problem. This is not an option for us…

You can also stop and start your Docker host box as a workaround. For example,

boot2docker stop
boot2docker start

fixed it for me.

We are having the same issue when pushing to Docker Hub. It seems to be erratic: it initially errored out mid-push of one of the images, and subsequent push attempts error out immediately. Over the weekend I was able to get one of them to upload after many attempts (lucky timing somehow?).

FATA[0025] Put https://index.docker.io/v1/repositories/thecompany/therepo/: dial tcp: lookup index.docker.io on 192.168.0.1:53: read udp 192.168.0.1:53: i/o timeout

Restarting boot2docker and even rebooting the machine seem to make no difference.

I’m not sure if this is the case, but it seems that the problem appears when running boot2docker start before $(boot2docker shellinit). It was fixed for me after adding the shellinit call to my .bashrc file.
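
For reference, a minimal sketch of that setup, assuming the default boot2docker VM (shellinit prints the actual export statements itself, so the line is the same on any machine):

# In ~/.bashrc: export DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY
# for the running boot2docker VM, so the docker client talks to the right daemon.
# Note: this fails if the VM is not running yet, so run boot2docker start first.
eval "$(boot2docker shellinit)"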

I received this error a few times today, 20/06, approx. 01:00 UTC.
Docker on Ubuntu 14.04, bare metal
Using the devicemapper storage driver

Restarting Docker had no real effect, beyond taking time.
It was a case of just trying again until successful.
The problem seems more infrastructure related, especially considering the RFC1918 address in the error. The error I received referred to a 10.x address (not on my LAN/VMs/containers etc.).

See http://stackoverflow.com/questions/26861390/docker-run-connection-timeout

I was not using an HTTP(S) proxy.
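
If it helps anyone narrow this down, here is a rough check of what the host’s own resolver and TLS path look like, outside of the Docker daemon (hostnames taken from the errors above; assumes dig and curl are installed):

# What does the local resolver return for the registry endpoints?
dig +short index.docker.io
dig +short registry-1.docker.io

# Can we reach the index over TLS at all, bypassing the Docker daemon?
curl -v https://index.docker.io/v1/ -o /dev/null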

I had the same issue,

boot2docker stop
boot2docker delete
boot2docker init
boot2docker up 

Solved the issue for me. This deletes the VM image used by boot2docker and re-initializes it.

I just had the same issue, and for me (on OS X), restarting the docker VM cured the problem. I suspect it was because I started the docker VM (boot2docker) at work yesterday and am now at home. I guess it caches some network config, perhaps proxies (we use HTTP/S proxies at work but I don’t have them here).

Happened to me too. I followed your approach (docker-machine restart vm) and it worked as well. Wonder what’s causing that…
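
For anyone who finds this later, a minimal sketch of that approach, assuming a machine named default (substitute your own machine name):

# Restart the VM that runs the Docker daemon
docker-machine restart default

# Refresh the client environment afterwards, since the VM's IP and certs may have changed
eval "$(docker-machine env default)"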

Faced the same issue:
Network timed out while trying to connect to https://index.docker.io/v1/repositories/sitespeedio/graphite/images. You may want to check your internet connection or if you are behind a proxy.

Tried this resolution in the terminal and it worked for me:
docker-machine stop machine_name
docker-machine start machine_name

I had the same issue; it was caused by an outdated version of the guest additions. To upgrade your machine, just run:
docker-machine upgrade [machine-name]

Works for me. Thanks!

I think this is actually an issue with DNS resolution when using VMs. If anyone stumbles across this thread, head here for options.

There is also a note on DNS configuration in the docker-engine install instructions, which you obviously won’t need if you’re using docker-machine, but it’s worthwhile to read.
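
For context, here is roughly what that note boils down to; a sketch assuming Ubuntu 14.04 with the upstart-based docker-engine package (paths and the init system differ on other distros, and the Google DNS servers are just examples):

# /etc/default/docker
# Point the daemon at public DNS servers instead of a local 127.0.x.x resolver
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

Then restart the daemon (e.g. sudo service docker restart) for the change to take effect.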

I’ve been facing a similar issue since yesterday; even reinstalling the Docker terminal makes no difference. I’m also not able to download any image from Docker Hub, either through Kitematic or through a Dockerfile.

Step 1 : FROM java:8
Pulling repository docker.io/library/java
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:22 min
[INFO] Finished at: 2016-07-27T15:07:25+05:30
[INFO] Final Memory: 40M/482M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (default-cli) on project eureka: Exception caught: Get https://registry-1.docker.io/v1/repositories/library/java/tags/8: dial tcp 52.3.178.145:443: i/o timeout -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (default-cli) on project eureka: Exception caught
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)

It’s working now. However, we did not change anything on our side. I guess some maintenance activity was happening on the AWS side, as it was failing with the error below:

Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (default-cli) on project eureka: Exception caught: Get https://registry-1.docker.io/v1/repositories/library/java/tags/8: dial tcp 52.3.178.145:443: i/o timeout ->