Starting / Running multiple containers and network host help

Hey all, first post and still learning, but of course time is always the enemy. I was given an image and told "we are going to have hundreds of connections". The image is rather small and runs under Python 3.10, and I am testing a few AWS instance types, from a t3.small to a c3.large, to find a sweet spot of CPU versus container count. The developer gave me the basic start command, and I did some quick reading to start understanding what does what, but I am still trying to understand the best practice for performance and redundancy. This is hosted in AWS, and I made a quick loop script to start 10 containers; this is the only part that is relevant:

# start 10 containers, mapped to host ports 8010-8019
for i in {10..19}; do
  /usr/bin/docker run --restart always --detach --name "docker-wxa$i" \
    -p "80$i:80" -v /var/run/docker.sock:/var/run/docker.sock aws-ecr/repo
done

When that runs, I get 10 containers, each on its own port (8010-8019). I can curl each one and get a response, and with the load balancer I can add each port and have 10 healthy nodes. The first issue is that when we stress this even a little, the response times are terrible. Reading up, a lot of the discussion suggests using host networking instead of published ports. So when I remove the port mapping, add --network host (keeping it in the loop), and start, I see the containers all start up and each appears on port 80 (which is fine). Response time was slightly better, but I am also looking at the --restart always flag: under stress, docker container ls shows the containers, but a lot of them keep restarting (not sure if that is CPU / stress related).
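
For reference, here is roughly what the host-networking variant of the loop looks like. One caveat: with host networking every container shares the host's network stack, so only one process can bind a given port, which may explain the restarts; running ten copies would only work if the app's listen port is configurable (the PORT env var below is an assumption about the app, not something the developer confirmed):

  # -p is ignored with --network host; each container binds directly on the host
  for i in {10..19}; do
    /usr/bin/docker run --restart always --detach --name "docker-wxa$i" \
      --network host -e PORT="80$i" \
      -v /var/run/docker.sock:/var/run/docker.sock aws-ecr/repo
  done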

Lastly, I am trying to find where the default startup config for Docker lives. I have a startup script, but the Docker service starts first using the old config, so I need to stop it and then manually start from my script.
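
For anyone else looking, the usual places to check on a systemd-based distro are:

  systemctl status docker   # shows whether the service is enabled and which unit file runs
  systemctl cat docker      # prints the full unit definition, including any drop-in overrides
  # daemon-level defaults typically live in /etc/docker/daemon.json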

So, to recap (and to phrase it as questions):

  1. Best practice for multiple containers: host networking or multiple published ports?
  2. Is it better to have multiple small machines with fewer containers, or larger machines with more containers?
  3. Where does the default config/startup for the Docker service live?

I did start reading about Docker Swarm, but I am trying to see if that is needed, suggested, etc.

Thank you again and looking forward to learning this as I can see a lot of use for it!

Hi.

If your containers are all the same, I would use docker service create (see the Docker documentation); then you can scale the service up and down as you go.
Swarm is nice if you have multiple servers and want to deploy your containers across the cluster.
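
A minimal sketch of that flow, assuming a single node and the same image as above (the service name "wxa" is just a placeholder):

  docker swarm init                                   # turn this host into a swarm manager
  docker service create --name wxa --replicas 10 \
    --publish published=8080,target=80 aws-ecr/repo   # routing mesh spreads requests across replicas
  docker service scale wxa=20                         # scale up or down as load changes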

I personally go for smaller servers, since "not much will die if one host dies", but I think it's a personal preference.

Thanks for the feedback.

I do have multiple servers and multiple containers, so I am going to read up on how Docker Swarm works. I just need to find the best/easiest way to have x servers and containers running, with a simple health check, behind an AWS ELB load balancer. The first way I did it, with multiple containers on multiple ports, was a good visual way to see each one; on a 1-core / 2-vCPU instance I was able to have 5 containers running before the CPU really started spiking and response times suffered. So I am just not sure on the terminology: I assume I can have multiple containers running on multiple ports, but again, I am still learning how to start, monitor, etc., and I appreciate the help!
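
As a side note on the health check part, here is a rough sketch of Docker-level health checks layered onto the original run command (this assumes curl exists in the image and the app answers on /, which I have not verified):

  /usr/bin/docker run --restart always --detach --name docker-wxa10 -p 8010:80 \
    --health-cmd "curl -fsS http://localhost/ || exit 1" \
    --health-interval 10s --health-retries 3 \
    aws-ecr/repo
  # docker container ls then shows (healthy) / (unhealthy) in the STATUS column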

It depends on the requirements of your application and the number of instances you want to run on a single node. In production, CPU/memory reservations and limits are useful to make sure the hardware is not overprovisioned. You need to measure your application to identify the sweet spot for the CPU/memory reservations and limits. This requires serious testing with a real-life workload, not just running a bunch of containers in an idle state.
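
A rough sketch of what those flags look like on a single container (the numbers are placeholders to be replaced with measured values, not recommendations):

  /usr/bin/docker run --detach --name docker-wxa10 -p 8010:80 \
    --cpus 0.5 --memory 256m --memory-reservation 128m \
    aws-ecr/repo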

Have you looked at AWS ECS? It provides a managed container service, which can run on either EC2 nodes or Fargate-managed nodes. Though typically people end up using AWS EKS, which is a fully managed Kubernetes service.

I share @terpz's view that multiple smaller instances are better than a few bigger instances. Though you need to take into consideration that you will have to deploy additional containers for monitoring, log management, and probably tracing.

I would consider a t3.small instance unsuited for a container engine host: burstable instances are only cheap when they are not constantly at their capacity limit, and 2 GB of RAM is not much to work with. Personally, I would not use anything smaller than 2 cores and 8 GB, and I typically prefer 4-core / 16 GB instances. From my experience, m4/m5/m5a/c4/c5/c5a are solid instance types for container engine hosts.

Sorry for the delay, holiday season is always crazy.

As for ECS: yes, I did look at that. The issue is that this is such a micro codebase; I am running 5 containers on a t3.small, and when I tested ECS it was overkill on price, since it looked like a 1:1 container-to-instance mapping. If I can get x containers on one small instance, I didn't see a way in ECS to start one large instance and run x containers on it, but I can revisit if that is possible.
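
For what it's worth, my understanding is that the 1:1 mapping mainly applies to the Fargate launch type; with the EC2 launch type, ECS bin-packs tasks onto your instances based on the task-level CPU/memory you declare, so one larger instance can run several copies. Something along these lines (the cluster and service names are made up for illustration):

  aws ecs update-service --cluster wxa-cluster --service wxa-service --desired-count 10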

For log management, the developers are writing directly to a CloudWatch stream (with custom alerts on 'warning' and 'critical' messages), so that is working as expected.
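
If it is ever useful as a fallback, Docker's awslogs logging driver can also ship container stdout/stderr to a CloudWatch log group without code changes (the region and group name below are made-up examples):

  /usr/bin/docker run --detach --name docker-wxa10 -p 8010:80 \
    --log-driver awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=wxa-containers \
    aws-ecr/repo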

While I am not happy to hear that the T series is unsuitable, it at least explains some of the latency. I am able to get 10 or so containers on a c3.large with better performance, but I think it circles back to what we said about finding a sweet spot. I didn't think such a small Python codebase would be this demanding, even with just 5 nodes, but with real testing it seems par for the course.

So with the hardware part better understood, I think the only open item I need to understand is the multiple-container / host-networking part and how best to run this.

As I said, I can start 10 containers, each on a different port, register them in a target group per port, and let the ELB health check determine whether each is healthy; or I can start some reading this week on using Docker Swarm with a swarm manager to control things. Any $.02 on that is appreciated as well.
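
From my reading so far, extending the docker service create sketch above to multiple servers would look roughly like this (the manager IP is a placeholder):

  docker swarm join-token worker     # run on the manager; prints the exact join command
  # then on each additional server:
  docker swarm join --token <token> 10.0.0.10:2377
  # the routing mesh publishes the service port on every node, so the ELB target
  # group can register all nodes on one port with a single health check path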