Linux network QoS within containers

I’m working on setting up a lab to test some things out. One of the things I need to validate is how the application works under different network conditions.

Everything runs on one beefy system, each component in its own container, and it all works great until I introduce network impairments, which seems to surface some Docker/container issues.

The environment is defined in a single compose file, and all containers have the necessary ports exposed.

Container1: Client1 - communicates with the agents via haproxy
Container2: haproxy - connects the client to the servers
Container3: Server1 - simulates packet loss
Container4: Server2 - simulates limited bandwidth
Container5: Server3 - control
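
For context, a minimal sketch of what such a compose file could look like (image tags, commands, and ports are assumptions, not the actual file; the `NET_ADMIN` capability is needed to run `tc` inside a container):

```yaml
# Hypothetical docker-compose.yml sketch; service names follow the list above.
services:
  client1:
    image: centos:7                # assumed tag
    command: ["sleep", "infinity"]
  haproxy:
    image: haproxy:latest          # assumed tag
    ports:
      - "8080:8080"                # assumed port
  server1:                         # packet loss
    image: centos:7
    cap_add: ["NET_ADMIN"]         # required for tc/netem inside the container
  server2:                         # limited bandwidth
    image: centos:7
    cap_add: ["NET_ADMIN"]
  server3:                         # control
    image: centos:7
    cap_add: ["NET_ADMIN"]
```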

The agent containers all run a CentOS base image.

There’s a script running on Client1 that sends data to and receives data from the servers through haproxy.

On Server1: Packet Loss
Requirement: drop packets destined for haproxy, but not packets for any other target.

tc qdisc add dev eth0 root netem loss 20%
The above command gives every packet, regardless of source/destination, a 20% chance of being dropped.
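
For completeness, this is how the blanket rule can be verified and cleaned up (a sketch; `172.18.0.2` is a hypothetical peer address on the compose network):

```shell
# Confirm the netem qdisc is installed on eth0.
tc qdisc show dev eth0            # should list "netem ... loss 20%"

# Observe the loss from inside the container; expect roughly 20% reported loss.
ping -c 100 172.18.0.2

# Remove the rule when done; connectivity should return to normal.
tc qdisc del dev eth0 root
```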

Not Working
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:1 handle 2: netem loss 20%
tc filter add dev eth0 parent 1:0 protocol ip pref 55 handle ::55 u32 match ip src <IP of HAPROXY> flowid 2:1
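
For reference, the commonly cited prio + netem pattern (e.g. in the netem documentation) differs from the above in two places: the filter's `flowid` points at one of the prio bands (such as `1:3`) rather than at the netem handle, and traffic *destined for* a host is matched with `ip dst`, not `ip src`. A sketch adapted to this setup, untested here (`<IP of HAPROXY>` stays a placeholder):

```shell
# Sketch of the usual prio + netem ruleset on Server1's eth0.
tc qdisc add dev eth0 root handle 1: prio
# Attach netem to band 3; traffic classified into bands 1-2 is untouched.
tc qdisc add dev eth0 parent 1:3 handle 30: netem loss 20%
# Steer packets destined for haproxy into band 3.
tc filter add dev eth0 parent 1:0 protocol ip prio 3 u32 \
    match ip dst <IP of HAPROXY>/32 flowid 1:3
```

Note this still starts with the same `prio` root command, so it would not by itself avoid the failure described below.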

As soon as I execute:
tc qdisc add dev eth0 root handle 1: prio
the network breaks. None of the other containers can ping Server1, though they can still ping each other (Server2 <-> Server3), and Server1 cannot ping anything. Any ping from Server1 fails with:
connect: No buffer space available

As soon as I clear the tc rules, everything goes back to normal.

I believe the issue has to do with TCP window sizes, but I’m not sure where to look for a solution.
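
One thing that may be worth checking before TCP windows: veth interfaces (which Docker uses for a container's eth0) are often created with a txqueuelen of 0, and the default FIFO queues that a classful qdisc like prio attaches to its bands size themselves from that value. A zero queue length can then make every enqueue fail, which shows up locally as exactly "No buffer space available" (ENOBUFS). A diagnostic sketch, offered as an assumption to test in the lab rather than a confirmed diagnosis:

```shell
# Check the transmit queue length of the container's eth0.
cat /sys/class/net/eth0/tx_queue_len

# If it prints 0, raise it before adding the prio qdisc and retest.
ip link set dev eth0 txqueuelen 1000
```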