Hi, in our project we have a setup where an app running inside a container can start another container on the host. This works because the container bind-mounts the host's Docker socket, declared as a volume in its docker-compose file.
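For context, the relevant part of the compose file looks roughly like this (the service and image names are placeholders, not our real ones):

  app:
    image: our-app-image            # placeholder image name
    volumes:
      # bind-mount the host Docker socket so docker-compose inside the
      # container talks to the host Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock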
The command that is executed is the following:
docker-compose -f /docker-compose/full-path up -d --force-recreate
What is the issue?
We want to add the ability to limit the resources a container can use, so we added the deploy field with resource limits. Normally the deploy field only applies to swarm mode, but according to the Docker documentation the resource limits are also honored by docker-compose up, without swarm.
The docker-compose file has the following option under the service name:
deploy:
  resources:
    limits:
      memory: 5000M
If we run the docker-compose command above with this file directly on the host, the container starts and the limits are applied.
But if we do this from inside the container, with the same docker-compose file and the same command, the limits are not applied.
So the question is: do we have to add some option to the way we start the container when it is launched from inside another container? Or what else could cause this difference in behavior?
Are you sure about that? docker-compose ignores everything underneath the deploy configuration item. Maybe they added support for the sake of compatibility; if that is the case, running the same docker-compose version inside the container as on the host should remedy the problem.
Though, if you run docker-compose I strongly suggest sticking to the 2.4 schema, as it targets docker-compose and provides the full feature set to control every aspect of the container configuration. The 3.x schemas, on the other hand, have configuration items that docker-compose simply ignores.
In the 2.x schemas, the configuration items mem_limit and mem_reservation are direct children of the service declaration.
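For instance, the equivalent of your deploy limit would look something like this in the 2.4 schema (service and image names are placeholders):

  version: "2.4"
  services:
    myservice:                  # placeholder service name
      image: myimage:latest     # placeholder image
      # hard memory limit, enforced by docker-compose without swarm
      mem_limit: 5000m
      # optional soft limit (reservation)
      mem_reservation: 2000m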
Why would it matter whether you control docker.sock from a native process on the host or from an isolated process in a container? If the consumer can access the socket-based REST API (it is reachable and the permissions are correct), it doesn't matter where the process runs.