I’m aware this is a bit of a strange use case, but it does make sense for a project we’re undertaking!
I’m trying to limit the total bandwidth that can be sent across the bridge network from Docker containers to the host, but whenever I try this I seem to lose all connectivity. Before I change anything, I can happily ping the host from a standard Ubuntu container:
    # ping -w 1 172.17.0.1
    PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
    64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.044 ms

    --- 172.17.0.1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms
On the host, I then set up my standard bandwidth limiting:
    # sudo tc qdisc add dev docker0 root handle 1: htb default 1
    # sudo tc class add dev docker0 parent 1: classid 1:1 htb rate 10mbps
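In case the classification state is relevant, this is how I inspect the qdisc and class on the host after setting them up (standard `tc` statistics flags; the per-class packet and drop counters should show where traffic is going):

```shell
# Show the root qdisc on docker0 with statistics (packets, drops, overlimits)
sudo tc -s qdisc show dev docker0

# Show the HTB class hierarchy with per-class counters
sudo tc -s class show dev docker0
```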
But as soon as I’ve done this, I have no connectivity at all from the Ubuntu container to the host:
    # ping -w 1 172.17.0.1
    PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.

    --- 172.17.0.1 ping statistics ---
    1 packets transmitted, 0 received, 100% packet loss, time 0ms
Everything works fine again if I remove the qdisc from the interface.
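For completeness, the removal step I’m using is just the standard teardown, which deletes the root qdisc (and with it the attached class) and restores the interface to its default queueing:

```shell
# Delete the root qdisc from docker0; the attached HTB class goes with it
sudo tc qdisc del dev docker0 root
```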
Am I doing something stupid here, or is there something subtle I’m missing?