Slow network when using bridge driver

I am experiencing slow network speeds when using the bridge driver. Downloads using curl take orders of magnitude longer than downloading the same file from the host. I am not sure how to diagnose the issue.

Setup: I have Docker installed inside a Linux VM running on Windows Hyper-V.

After noticing the slowdown I used netshoot to run a speed test, and it confirmed what I observed with curl: the download speed is about 100x slower than on the host, while the upload speed and the latency seem to be more or less the same as the host. When I pass the --network host flag to docker run, the download speed matches the host, roughly as shown below.
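These are roughly the commands I used for the comparison (the URL is just a placeholder for the file I was downloading):

# default bridge network (the slow case)
docker run --rm nicolaka/netshoot curl -o /dev/null -w '%{speed_download}\n' https://example.com/testfile
# host network (matches the host's speed)
docker run --rm --network host nicolaka/netshoot curl -o /dev/null -w '%{speed_download}\n' https://example.com/testfile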

I have tried searching online for similar issues but I can’t seem to find any. I am quite new to Docker and feel quite stuck with this issue. What can I do to better diagnose what is going on?

Maybe a difference in MTU?

Checking ip address on the host and in the netshoot container yields the output below. The bridge interface has an MTU of 1500, the same as the host interface, and the container interface is configured with the same value.
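For reference, these are roughly the commands that produced the output (assuming the stock nicolaka/netshoot image):

ip address                                      # on the host
docker run --rm nicolaka/netshoot ip address    # in a netshoot container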

Host:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:01:9e:07 brd ff:ff:ff:ff:ff:ff
    inet 172.31.145.226/20 brd 172.31.159.255 scope global dynamic noprefixroute eth0
       valid_lft 86078sec preferred_lft 86078sec
    inet6 fe80::767b:a74b:67b9:4720/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c7:8e:e0:8c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c7ff:fe8e:e08c/64 scope link 
       valid_lft forever preferred_lft forever

Container:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
887: eth0@if888: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Then please share more information about your environment: the Linux distribution and version, and how you installed Docker.

So yesterday I got an update from Lenovo for my laptop; after restarting the machine and booting back into the VM, the speeds were similar for both Docker and the Linux VM. I had tried restarting the VM before and that did not fix it, so maybe it’s something to do with the Windows host?

The environment: I have a Linux Ubuntu 22.04 virtual machine running on top of Windows 11 22H2 Hyper-V. I installed docker-engine by following these steps for Ubuntu.

I don’t think so, unless updating the host changed the virtual machine and its network configuration. I don’t have much experience with Hyper-V, but if you have only one network in the VM (besides the Docker networks), the speed should be the same inside and outside of Docker containers unless there is some packet loss. That is why I suspected an MTU difference.
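If an MTU mismatch ever does turn out to be the cause, the bridge MTU can be lowered. A minimal sketch (1450 is only an example value, and mynet is a made-up network name):

# default bridge: set "mtu": 1450 in /etc/docker/daemon.json, then restart the Docker daemon
# user-defined bridge network: pass the MTU as a driver option instead
docker network create -o com.docker.network.driver.mtu=1450 mynet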

I’m not a network expert either, but I have some network debugging tips in my blog post:

You can use the netshoot container to trace where packets are dropped or where the response starts coming back more slowly, but I can’t give you a specific command to try. I would just try tshark on the host and in the container too while sending packets over the network, along the lines of the sketch below.
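Something like this, for example (the interface names, the capture filter and the container name are assumptions you would need to adjust to your setup):

# on the host: capture what leaves the Docker bridge
tshark -i docker0 -f 'tcp port 443'
# in the container's network namespace, using netshoot attached to a running container
docker run --rm -it --network container:<container-name> nicolaka/netshoot tshark -i eth0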

I thought I saw that the Lenovo update changed the Intel networking driver, but I checked again and that does not seem to be the case. Thanks for your reply and the information. If it happens again I will give your post a try. For now it seems stable.