Docker Community Forums

Share and learn in the Docker community.

Docker pull results in "Request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"

Issue type

– error
When trying to pull Docker images from the registry, I am confronted with this error:

Using default tag: latest
Error response from daemon: Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

OS version

Ubuntu 18.04 (bionic)

Docker version

Client:
Version: 18.09.4
API version: 1.39
Go version: go1.10.8
Git commit: d14af54266
Built: Wed Mar 27 18:35:44 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Version: 18.09.4
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: d14af54
Built: Wed Mar 27 18:01:48 2019
OS/Arch: linux/amd64
Experimental: false

Steps to reproduce

docker pull hello-world

I am not behind a proxy or VPN (which seems to be the problem in other posts I’ve seen about this error).

I have attempted to search for a fix, but to no avail. service docker restart does not resolve the issue. I have found that adding a nameserver entry to /etc/resolv.conf temporarily resolves the issue, but resolv.conf is automatically rewritten back to the old version, so this is not an ideal solution. I have been able to pull Docker images from Docker Hub on this computer and on my current network in the past, so presumably some setting changed, causing this issue. Any help would be much appreciated! Thanks very much.
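As a diagnostic sketch (assuming the stock Ubuntu 18.04 setup, where systemd-resolved manages resolv.conf and therefore overwrites manual edits), you can check what is managing the file:

```shell
# Check whether /etc/resolv.conf is a symlink managed by systemd-resolved
# (the stock setup on Ubuntu 18.04); if so, manual edits get rewritten.
ls -l /etc/resolv.conf
# Ask systemd-resolved for its view, if it is running at all.
systemd-resolve --status 2>/dev/null | head -n 20 || true
```

If the first command shows a symlink into /run/systemd/resolve, that explains why the file keeps reverting.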


If it is relevant, here is the output of docker info:

Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 9
Server Version: 18.09.4
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
Profile: default
Kernel Version: 4.15.0-1035-oem
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 31.05GiB
Name: ###(redacted)###
ID: ###(redacted)###
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Insecure Registries:
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support

Same problem here using docker login on the shell:

> docker login
Authenticating with existing credentials...
Login did not succeed, error: Error response from daemon: Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I have exactly the same issue. I’ve been using Docker for Windows for months without any problems, and then suddenly I can’t download any image and can’t log in using “docker login”. Restarting Docker, resetting to factory defaults, reinstalling Docker for Windows, restarting Windows, fixing DNS: none of those solved the problem.

I’m sorry this won’t help you guys: I’m using Docker in VMware Workstation 12 Pro on CentOS 7 and reverted to an earlier snapshot. That made it work for me.

Same problem here. No command works to pull images.

Had the same issue. Changing my password on Docker Hub and restarting Docker resolved it for me.

The previous solution only worked until I rebooted my machine. Sigh.

But I’ve found another trick on my Linux machine.

I forced the daemon to run through a proxy before connecting to the Docker servers, since the registry was giving me timeout errors and I could not even ping it anymore.

Following these instructions, I was able to fix my problem. Here is all I did:

1. Create a new directory:

$ sudo mkdir -p /etc/systemd/system/docker.service.d

2. Get a free http proxy from this list.

3. Create the file below and paste in the HTTP proxy server address:

# File: /etc/systemd/system/docker.service.d/http-proxy.conf
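The contents of that file were not reproduced above. Based on Docker’s documented systemd proxy configuration, a minimal sketch would be (the proxy address below is a placeholder; use the one you picked in step 2):

```ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
```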

4. Reload the systemd manager configuration:

sudo systemctl daemon-reload

5. Restart the Docker daemon service:

sudo systemctl restart docker

6. Check if it’s working:

docker pull alpine

It is working here now, but the download speed is slow, since the connection no longer goes directly to the server. I googled a lot and have no clue how to fix this any other way.

Edit: Don’t use a proxy to log in. The connection goes through someone else’s PC, so they can receive your credentials. Today I removed the HTTP proxy before logging in, and now sometimes the connection is established, sometimes it isn’t. My network has been slow for the past week. Perhaps that’s the reason?


I also used a similar approach to @b33j4y:

I created a new Docker ID tied to my day job’s email address, which implicitly included setting a password. The container still would not pull; same connection error.

I then restarted Docker Desktop, and after a successful restart I was able to pull the several containers I needed.

Changing DNS settings solved the issue for me.


Great, thanks a lot! This also worked for me on Debian Buster. I was close to going mad …

This worked for me. You saved me dude. Thanks a ton for your help!

Yes, what @tonihoo shared worked.

For years it’s been simple to set up DNS on a Linux machine. Just add a couple of entries to /etc/resolv.conf and you’re done.

# Use Google's public DNS servers.
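The entries the comment above refers to did not survive; assuming the usual recipe with Google’s public resolvers (8.8.8.8 and 8.8.4.4), the file would read:

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```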


If you are behind a home Wi-Fi router that is acting as a firewall, plug your computer directly into the cable modem via an ethernet cable and do a docker login. You will get a warning about your password being saved in plain text (base64-encoded, actually) in ~/.docker/config.json, and it will log you in.
Go back to Wi-Fi.
Take appropriate precautions to protect your credentials by specifying a credentials store in $HOME/.docker/config.json.
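As a sketch of that last step: the Docker client config supports a credsStore key naming an external docker-credential-* helper. The helper name below is an example; the matching docker-credential-pass binary must be installed for it to work:

```json
{
  "credsStore": "pass"
}
```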

This solution worked for me too.

None of these solutions works for me. From one day to the next the problem occurred, and there we are. I could use the solution from @efranelas once or twice, but a day later the proxy was broken again. Since then, none of the proxies listed on the website has worked; one worked seemingly at random, but only for a few hours.

I really don’t know what to do. The errors I get with an added proxy vary a lot: sometimes Forbidden or Bad Request, sometimes unexpected EOF or net/http: TLS handshake timeout, and many more…

Any ideas?

I was having the same issue connecting to a private repository (in Azure). I’ve seen other forums suggesting a change of DNS. I queried using both my DNS and a public server, and both returned exactly the same results.

Doing docker login didn’t work (Client.Timeout). But after I changed /etc/resolv.conf and put a public nameserver there, it worked (even though it returned exactly the same info as my default DNS).

It also works if you use Cloudflare’s DNS.

So my suspicion was that the Docker client has some issue resolving names when going through the OS-provided mechanism, but that specifying a DNS server directly works.

I retract the claim that this has to do with the Docker client. It has to do with the DNS resolver in Ubuntu 18.04 (my system). I started noticing some requests taking a long time in Postman for something else, and that’s when I realized that the DNS lookup was taking the longest.
I put a hard-coded DNS server in resolv.conf (as mentioned before) and that fixed it.
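A minimal way to confirm that the resolver itself is the slow part is to time a lookup yourself. The resolve_ms helper below is ours, not Docker’s; getent goes through the normal OS resolution path that Docker also uses:

```shell
#!/bin/sh
# Print how long the system resolver takes for one hostname, in milliseconds.
resolve_ms() {
    start=$(date +%s%N)             # nanoseconds since the epoch (GNU date)
    getent hosts "$1" > /dev/null   # resolve via the OS resolver path
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

# A result of several seconds here points at the resolver, not at Docker.
resolve_ms localhost
```

Run it against the registry hostname you are pulling from; a lookup that takes seconds is the smoking gun.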

First of all, try opening a new terminal window. It helped me.

What worked for us is preventing NetworkManager from modifying /etc/resolv.conf:

Remove this symlink:
/etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
and make resolv.conf a static file.

The only entry we have in /etc/resolv.conf:


You may also need a connection-specific DNS config (might not be necessary):
nmcli con show
nmcli con mod <connection-name> ipv4.dns "<dns-server>"
nmcli con mod <connection-name> ipv4.ignore-auto-dns yes
nmcli con down <connection-name>
nmcli con up <connection-name>

Restart NetworkManager and check resolv.conf to see that your changes are still in place.