
Is there a way to limit the number of maximum containers that can attach to a Docker Network?

Hi folks!

I am using Docker on a shared supercomputer where users deploy their images and run containers. The shared machine has several GPUs, plenty of RAM and many CPU cores, running an Ubuntu distro.

I would like to constrain Docker container resources (CPU, RAM and GPU count), and I believe I already have solutions for that, which should stop users from hoarding resources and blocking other users from deploying and running their containers.
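For context, the kind of per-container caps I have in mind look roughly like this (the --gpus flag assumes the NVIDIA Container Toolkit is installed, and the image name and values are only placeholders):

    # Cap a single container at 2 CPUs, 4 GB of RAM and 1 GPU
    docker run -d --cpus=2 --memory=4g --gpus 1 some-user-image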

One area I am still trying to figure out is how to constrain the number of IP addresses consumed per container. I have a limited number of IP addresses available in the subnet that Docker allocates containers from (this is constrained in the Docker daemon.json). We also have a Docker manager app that is used to manage containers. For example, two containers may want to communicate with each other, so they will share a Docker network between them. However, in the case of a single isolated container, we create a Docker network just so the manager app and the container can communicate. In that case, I would like to limit the number of containers that can join this Docker network to just two (the managing app and the container). Is this possible via the Docker Engine API?

Trudging through the Docker Engine API, I think I can configure the IPAMConfig, but I'm not sure how it works, as the documentation isn't making it immediately clear.
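The closest thing I can see is shrinking the address pool the network can hand out via the IPAM block of the /networks/create endpoint; something like the sketch below (the subnet values are just placeholders, and I'm not sure this is the intended approach):

    # IPRange only leaves about 2 assignable addresses once the gateway is reserved
    curl --unix-socket /var/run/docker.sock \
      -H "Content-Type: application/json" \
      -d '{
            "Name": "pair-net",
            "Driver": "bridge",
            "IPAM": {
              "Driver": "default",
              "Config": [
                {
                  "Subnet": "172.28.5.0/29",
                  "IPRange": "172.28.5.0/30",
                  "Gateway": "172.28.5.1"
                }
              ]
            }
          }' \
      http://localhost/networks/create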

There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:

  1. How you are configuring your docker containers.
  2. What you are running in your containers.
  3. What kernel, distribution and docker version you are on.

The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.

Docker

Docker creates or uses a number of resources to run a container, on top of what you run inside the container.

  • Attaches a virtual ethernet adaptor to the docker0 bridge (1023 max per bridge)
  • Mounts an AUFS and shm file system (1048576 mounts max per fs type)
  • Creates an AUFS layer on top of the image (127 layers max)
  • Forks 1 extra docker-containerd-shim management process (~3MB per container on avg, and subject to sysctl kernel.pid_max)
  • Keeps Docker API/daemon internal data to manage the container (~400k per container)
  • Creates kernel cgroups and namespaces
  • Opens file descriptors (~15 + 1 per running container at startup; see ulimit -n and sysctl fs.file-max)
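The kernel limits mentioned in that list can be checked directly on the host (the values will differ between distributions):

    sysctl kernel.pid_max   # ceiling on process IDs, which the shim processes count against
    sysctl fs.file-max      # system-wide open file descriptor limit
    ulimit -n               # per-process file descriptor limit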

Docker options

  • Port mapping -p will run an extra process per port number on the host (~4.5MB per port on avg before 1.12, ~300k per port in 1.12 and later, and also subject to sysctl kernel.pid_max)
  • --net=none and --net=host would remove the networking overheads.
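For example, a container that shares the host's network namespace skips the veth/bridge attachment and the per-port proxy processes entirely (nginx here is just a stand-in image):

    docker run -d --net=host nginx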

Container services

The overall limits will normally be decided by what you run inside the containers rather than by Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run).

If you are running apps in a language VM (Node, Ruby, Python, Java), memory usage is likely to become your main issue.

IO across 1000 processes would cause a lot of IO contention.

1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above for garbage collection).

If you create network connections from 1000 containers, the host's network layer will get a workout.

It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.

Example

1023 Docker busybox containers running nc -l -p 80 -e echo host use up about 1GB of kernel memory and 3.5GB of system memory.

1023 plain nc -l -p 80 -e echo host processes running on the host use about 75MB of kernel memory and 125MB of system memory.

Starting 1023 containers serially took ~8 minutes.

Killing 1023 containers serially took ~6 minutes.
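Roughly how a test like that can be scripted, assuming the busybox image and default bridge settings (a sketch, not necessarily the exact commands behind the figures above):

    # Start 1023 containers, each running a tiny listener
    for i in $(seq 1 1023); do
      docker run -d busybox nc -l -p 80 -e echo host
    done

    # Kill them all again
    docker ps -q | xargs -n1 docker kill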

Thanks for the reply!

I'm actually looking specifically at Docker networks and how to limit the number of containers that can connect to a Docker network. The --net option looks interesting; I'll take a look at that.

In terms of I/O, RAM and CPU, there are workarounds offered by the Docker daemon that let me control container resource allocation.