Is it possible container can fully access host machine?

I'm building a monitoring service that needs to access the logs and system stats of multiple containers, but it seems impossible to access other containers' stats from the monitoring container, so I want to monitor the host machine instead as an alternative solution. Is this possible with Docker Compose? Or do I need to use Docker with Kubernetes?

The title of this topic is:

Is it possible container can fully access host machine?

which doesn’t make sense, since containerization is about isolation. Full access would mean there is no isolation, so no container. If you disable all the isolation except the mount namespace (which cannot be disabled), that is almost full access, since you can also mount the host's root filesystem into a subfolder of the container. There are also kernel capabilities and the privileged flag, but let’s see what you actually want.
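As a sketch of what mounting the host's root filesystem into a subfolder looks like in a compose file (the service name and image are just placeholders for illustration):

```yaml
services:
  inspector:
    image: alpine  # any image; only for illustration
    volumes:
      # bind mount the host's root filesystem read-only at /host
      # inside the container
      - /:/host:ro
```

Even read-only, this exposes essentially everything on the host to the container, which is why it is close to "full access".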

There are existing monitoring tools like Prometheus + Grafana, so you probably don’t have to create your own. If you already have one, share what you are using and maybe we can help you with that. You can check how these monitoring tools work. If communicating with the Docker API is enough, you can mount the Docker socket into the container and use that. If you need logs, you can also use the logging drivers of Docker: Configure logging drivers | Docker Documentation
and send the logs to a central logging service.
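A minimal sketch of the socket-mount approach in a compose file (the service name and image are placeholders):

```yaml
services:
  monitor:
    image: your-monitoring-image  # placeholder
    volumes:
      # read-only bind mount of the Docker socket so the monitoring
      # service can query the Docker API (containers, stats, logs)
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

Note that access to the Docker socket effectively grants root-equivalent control over the host, so only do this for a container you trust.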

You can use Grafana Loki too.

These tools almost always describe in their documentation what you need to do to use them with Docker, if that is supported.

If you are still interested in capabilities, you can read about them here:

No, you don’t. Some tools may work only with Kubernetes, but usually Docker without Kubernetes is also supported.

I have PostgreSQL, Redis, and web applications and use Elastic Agent to monitor them, so I want to find a solution that avoids writing a Dockerfile. I tried mounting all the log files into the Elastic Agent container, but it cannot monitor other containers' stats, so I wonder if the Elastic container has some way to access other containers' stats or to monitor the host machine directly.

I think I understand your problem. The documentation of Elastic Agent is not really clear about how you can use it in containers. It has an example compose file, but it doesn’t mention mounting the Docker data root into the container (at least I couldn’t find it), and I couldn’t find anything about monitoring other containers (besides the agent's own). Of course, learning about a tool usually requires more than a couple of minutes, so it is possible that I didn’t use the documentation correctly, but Elasticsearch + Kibana was originally invented primarily for logging, and I have never used it for collecting metrics.

You can try setting the “privileged” flag in Docker Compose for the service (which is usually not recommended) and setting “pid: host” to use the host’s process namespace. That could help the Elastic Agent see other processes, if that is enough.
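A hedged sketch of such a compose service (the image tag is illustrative; use the version you actually run):

```yaml
services:
  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:8.12.0  # use your version
    privileged: true  # usually not recommended; widens access to the host
    pid: host         # share the host's PID namespace so host processes
                      # (including other containers' processes) are visible
```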

Even if it helps, you should ask on the Elastic forum, as I am not sure I gave you the best advice:

Elastic Agent needs integrations installed to deploy collectors (e.g. for collecting nginx logs or network traffic). Ordinarily that means I need to install all of them in one container. But I saw the official Kubernetes example using hostPID: true and hostNetwork: true so that the Elastic Agent can monitor host processes and the network.

Run Elastic Agent on Kubernetes managed by Fleet

This is why I wrote

which is the alternative to hostPID: true. Instead of hostNetwork: true, you can use network_mode: host in a compose file.
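A sketch of the Compose equivalents of those two Kubernetes settings (the image tag is again illustrative):

```yaml
services:
  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:8.12.0  # use your version
    pid: host           # Compose equivalent of Kubernetes hostPID: true
    network_mode: host  # Compose equivalent of Kubernetes hostNetwork: true
```

With network_mode: host the container shares the host's network stack directly, so published ports are ignored for that service.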