I have a script I am running to monitor memory across docker containers and report it back to statsd.
It all boils down to being able to run:
`docker ps` and
`docker exec <container> ps aux` and parsing the results.
The parsing logic is involved so I coded it in Ruby.
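The parsing side can stay compact; here is a minimal Ruby sketch of the idea, with hypothetical helper and metric names, that sums RSS from `ps aux` output and formats a statsd gauge line (in the real script the input would come from `docker exec <container> ps aux`):

```ruby
# Sum resident set size (RSS, the 6th column of `ps aux`, in KB) and
# format a statsd gauge line. The container name and metric prefix
# here are made up for illustration.
def rss_kb(ps_aux_output)
  lines = ps_aux_output.lines.drop(1)        # drop the header row
  lines.sum { |line| line.split[5].to_i }    # RSS is column index 5
end

def statsd_gauge(container, kb)
  "docker.memory.#{container}:#{kb}|g"
end

sample = <<~PS
  USER PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
  root   1  0.0  0.1  10000  5120 ?   Ss   10:00 0:00 nginx
  root  12  0.0  0.2  20000 10240 ?   S    10:00 0:00 worker
PS

puts statsd_gauge("web", rss_kb(sample))  # → docker.memory.web:15360|g
```

statsd gauges are plain UDP text, so the formatted line can be pushed with a bare `UDPSocket` and no client library.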
If I go with the “traditional” approach here, I would just deploy the script and Ruby on all the machines and ensure an appropriate upstart job is in place.
However, I would prefer to use Docker for the packaging, which would make updating easier.
If I start a container in privileged mode, will I be able to run `nsenter` on other running containers on the box?
If that is too tricky, how do I set up a container to perform this via the API without punching open gaping security holes?
Any other ideas on how to achieve this?
I don’t think you need dind (Docker-in-Docker) to do this. One approach I have used is to mount the Docker daemon’s Unix socket inside the monitoring container. For example:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock monitor-image
The docker client inside `monitor-image` will then be able to talk to the host daemon.
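The API itself is also reachable over that socket without a docker binary at all; a sketch, assuming `curl` is available inside the image:

```shell
# List running containers via the Docker Remote API over the mounted
# socket. This is roughly what `docker ps` does under the hood.
curl --silent --unix-socket /var/run/docker.sock \
  http://localhost/containers/json
```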
Brilliant, sounds like a very good plan and reasonably safe!
You will probably also need the docker client binary, and mount everything read-only:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /usr/bin/docker:/usr/bin/docker:ro \
  monitor-image
This is the approach I use anywhere that I don’t need a different version of the Docker daemon running.
IMO dind is only really needed when testing a development Docker daemon - and even then, I prefer micro VMs for that.
It’s been pretty solid over the weekend. I deployed this:
Launch the container, allow it to talk to the docker socket, and set the restart policy to always. Now data is flowing in beautifully and I have pretty pictures to show, and I don’t have to worry about machines rebooting and so on.
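For anyone following along, a sketch of that launch command, reusing the `monitor-image` name from earlier in the thread:

```shell
# Run the monitor detached, surviving daemon and machine restarts.
# Socket and client binary are mounted read-only as suggested above.
docker run -d \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /usr/bin/docker:/usr/bin/docker:ro \
  monitor-image
```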