Maybe the processes inside the containers are doing something that requires more memory when the host is not fully started yet. You highlighted some of the containers, but some containers use even less memory than before, so it probably depends on what the processes do.
I don’t know. Maybe someone who knows those applications can give you an idea of what they could do differently at startup. You can try enabling verbose/debug logs in the containers and hope they log more information. You can also try delaying the Docker start, just to test what happens when it has more time before starting the containers. You would need to add a sleep to the systemd service:
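For example, with a drop-in override instead of editing the unit file directly (a sketch; the drop-in file name and the 30-second value are arbitrary assumptions):

```ini
# /etc/systemd/system/docker.service.d/delay.conf
# (can be created with: sudo systemctl edit docker.service)
[Service]
# wait before the Docker daemon starts, so the rest of the host
# has more time to come up first
ExecStartPre=/bin/sleep 30
```

Then run `sudo systemctl daemon-reload` and reboot to test it.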
I executed docker builder prune and tried different tests:
stopping the containers, stopping the Docker service, and rebooting: same problem.
I also tested different parameters in docker.service:
ExecStartPre=/bin/sleep 10
ExecStartPost=/bin/sleep 10
ExecStop=/bin/sleep 10
but the problem is the same.
After the next reboot, I ran sudo docker restart $(sudo docker ps -q) and the RAM usage is OK again. I’m still looking for another way.
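As a stopgap, that restart-after-boot step could be automated with a oneshot systemd unit (a sketch; the unit name and the 60-second delay are assumptions, and `xargs -r` avoids an error when no containers are running):

```ini
# /etc/systemd/system/docker-restart-containers.service (hypothetical name)
[Unit]
Description=Restart all running containers once after boot (workaround)
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
# give the containers time to finish their initial start before restarting them
ExecStartPre=/bin/sleep 60
ExecStart=/bin/sh -c 'docker ps -q | xargs -r docker restart'

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable docker-restart-containers.service`. This only hides the symptom, of course; it doesn’t explain the extra memory usage.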
I also tried:
sudo systemctl disable docker.service
sudo systemctl disable docker.socket
and rebooted: same problem after starting the Docker service manually.
The problem is the reboot. @rimelek, you said: maybe the processes inside the containers are doing something that requires more memory when the host is not fully started yet.
I re-enabled swap, but the problem is the same.
Could it be cgroup_enable=memory?
Perhaps after a reboot the system doesn’t refresh the memory values.
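Whether the memory cgroup controller is actually active can be checked directly on the host (a minimal sketch; the paths assume a Linux host, and `cgroup_enable=memory` is a cgroup v1 boot parameter used e.g. on Raspberry Pi OS):

```shell
# On cgroup v1 systems the kernel command line must contain
# "cgroup_enable=memory cgroup_memory=1" for the memory controller:
cat /proc/cmdline

if [ -r /sys/fs/cgroup/cgroup.controllers ]; then
    # cgroup v2: this file lists the enabled controllers; "memory" should appear
    cat /sys/fs/cgroup/cgroup.controllers
elif [ -d /sys/fs/cgroup/memory ]; then
    # cgroup v1: the memory hierarchy is mounted when the controller is enabled
    echo "memory cgroup (v1) enabled"
else
    echo "memory cgroup controller not found"
fi
```

If the controller is missing, `docker stats` cannot report container memory limits correctly, but that would not by itself make processes use more memory.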
Sorry, I wasn’t around much, and I can’t give you more ideas. Docker would not just “give” processes more memory. If you see in the statistics that the applications used more memory, then some state is different, which makes the processes use more memory. What could cause it, I don’t know. If it is related to the containerized environment, maybe it has something to do with the overlay filesystem, but I can’t imagine how. If the additional memory usage is caused by something that is not fully running yet when Docker starts, that is hard to debug. The sleep was just an idea, but apparently it didn’t help. If you find the answer, I’d appreciate it if you shared it. Until then, is this problem critical for you at the moment?
Cleanups have nothing to do with how many resources running containers use. If there is some indirect connection, I have no idea what it could be. When a container is killed forcefully after 10 seconds, it means the process inside doesn’t handle signals correctly. That is something you should fix, because it makes stopping containers slower and could lead to data corruption, but I don’t think it makes other containers use more memory.
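To illustrate the signal point: `docker stop` sends SIGTERM to the container’s main process and, if it hasn’t exited after the grace period (10 seconds by default), follows up with SIGKILL. A shell entrypoint that ignores SIGTERM will always hit that timeout. A minimal sketch of an entrypoint that handles it, demonstrated locally by sending the script a SIGTERM (the path `/tmp/entrypoint.sh` is just for this demo):

```shell
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
# Exit cleanly on SIGTERM (what "docker stop" sends to PID 1).
trap 'echo "caught SIGTERM, shutting down"; exit 0' TERM
# Long-running "service": sleep in the background and wait on it,
# so the trap can fire immediately instead of blocking inside sleep.
while true; do
  sleep 1 &
  wait $!
done
EOF
chmod +x /tmp/entrypoint.sh

# run it, then stop it the way docker stop would
/tmp/entrypoint.sh &
pid=$!
sleep 1
kill -TERM "$pid"
wait "$pid"
echo "entrypoint exit status: $?"
```

In a real image, using the exec form (`CMD ["myapp"]`) or `exec myapp` at the end of the entrypoint script also helps, because then the application itself becomes PID 1 and receives the SIGTERM directly.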