I want to monitor INTERNAL docker services

As title says, I know there are a lot of services that let me monitor the actual container but I want to monitor the service inside it.

Got anything?
Thanks

What do you want to monitor? Health? Then you can create a health check for the service inside the container, which can be monitored with container tools.
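
For example, a minimal compose sketch of such a health check (service name, image and endpoint are just assumptions, not from your setup):

```yaml
services:
  api:                              # hypothetical service name
    image: my-api:latest            # hypothetical image
    healthcheck:
      # assumes curl exists in the image and the service answers on /health
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

docker ps then shows the container as healthy/unhealthy, and docker inspect / docker events expose that state to container tooling.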

1 Like

Just one more note. Normally the container is the service inside it. That sounds like nonsense, but what I mean is that the container is just the isolation of the processes, and you normally run one service per container, so when you monitor the container you monitor the service in it. The processes are still visible on the host, so any monitoring system running on the host will see all processes regardless of whether they run in a container or not.

1 Like

I’m using Docker while I’m developing, not only for deployment, so I want to be able to see logs etc. without having to go inside every container, open the logs and grep for errors. I was wondering if there’s anything that will monitor that for me. For example, if I’m running an API and a request fails, that won’t bring down the process and will likely be lost somewhere in the logs, so I won’t know about it unless I actively look for it. I was wondering if there is an "alert" system that will let me know about this without me having to be inside the container looking for it.

I think the health check only really checks whether the service is still up and running via a predefined "ping".

1 Like

I guess what I mean is: are there tools that help with debugging/dev of dockerized workflows? I know that Docker is for deployment and not really development, but since I dockerize everything, that’s also how I work in the dev stage.

So for example, I have 10 containers on my server; having to connect to each, go to the logs and then grep whatever I need to debug seems inefficient. I was wondering if there was a better way to monitor such logs. Most solutions I have come across seem to be for the actual container health and status instead of the service inside it.

Would this just be me fundamentally not using the product for what it is designed for? I’d imagine people with docker-centric workflows also dev in Docker, or should it be used only for deploying "prod ready" processes?

Both of you might want to research:

  • ELK stack (or its fork OpenSearch)
  • Prometheus stack + Loki

Both allow you to ship container logs to a backend that persists them. Both have frontends that allow you to query the logs and configure alerts based on them. Neither is simple to set up without learning the solution properly. The first is more resource intensive, but allows fine-grained queries. I would use neither on a developer machine.
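
To illustrate the log-shipping part for the second option, here is a minimal sketch using the Loki Docker logging plugin; the Loki URL and the service are placeholders, and the plugin has to be installed on the host first:

```yaml
# one-time host setup (not part of the compose file):
#   docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
services:
  api:                              # hypothetical service
    image: my-api:latest            # hypothetical image
    logging:
      driver: loki
      options:
        loki-url: "http://loki.example.internal:3100/loki/api/v1/push"  # placeholder Loki endpoint
        loki-retries: "2"
```

Queries and alert rules then live in Grafana/Loki, not in the compose file.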

1 Like

Yeah, that’s what I’m seeing, but as you stated, those require an intensive setup and seem to be more production level.

Any recommendations at all for something more straightforward that you would suggest for a dev server?

If your dev server runs on AWS, you could use the "Amazon CloudWatch Logs logging driver" so that the container logs are available in CloudWatch, and then use alarming on logs to configure and receive alarms based on the log messages.
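
A hedged sketch of what that could look like in a compose file, assuming the instance/daemon has IAM permissions for CloudWatch Logs (region, log group and image are placeholders):

```yaml
services:
  api:                              # hypothetical service
    image: my-api:latest
    logging:
      driver: awslogs
      options:
        awslogs-region: "eu-central-1"          # placeholder region
        awslogs-group: "dev-server-containers"  # placeholder log group
        awslogs-create-group: "true"
```

A metric filter on the log group (e.g. matching "ERROR") plus a CloudWatch alarm on that metric would then cover the alerting part.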

If the dev server is on another cloud hyperscaler: check their offerings, they should have something similar.

If it’s on-prem, then I don’t see a way around a solution like the ones mentioned before. Set up their instances outside your dev server, and only run the log shipper on your dev server (and other servers if wanted). You typically end up with one logging solution instance for non-prod and one for prod workloads. In some companies this multiplies per team and/or project.

If it’s for a developer on a dev machine: Docker Desktop + the log explorer extension make it convenient to search container logs. Though there is no alerting.

Update: it seems there is a lightweight solution: https://dozzle.dev. I have never used it and can say nothing about it.

1 Like

Thanks!

All our dev servers are on prem.

Looking into dozzle, it seems it only provides a snapshot of everything while the UI is open, and there are no plans for alerts, nor does the architecture support them:

Definitely more to explore, thanks for the help :slightly_smiling_face:

1 Like

For plain debugging, docker logs -f, docker compose logs -f and dozzle work, but debugging usually doesn’t need alerts.

If you want logs shipped, check open source Grafana Loki and VictoriaLogs, maybe Elastic stack.

2 Likes

@bluepuma77 I wasn’t aware of VictoriaLogs, it indeed looks like a promising lightweight solution to persist logs. Thank you!

While snooping through their docs, I found a link to a comparison between Elasticsearch, Loki and VictoriaLogs: https://itnext.io/how-do-open-source-solutions-for-logs-work-elasticsearch-loki-and-victorialogs-9f7097ecbc2f

2 Likes

Interesting, this seems like it would still align more with a prod-level workflow, as it requires the Grafana Loki stack?

It seems most modern open source log analysis tools always use 3 parts (see the sketch after this list):

  1. Log shipping agent
  2. Log database
  3. Log GUI, like Grafana
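
A minimal sketch of those three parts with the Grafana stack; image tags, ports and the promtail config file are assumptions, not a drop-in setup:

```yaml
services:
  promtail:                         # 1. log shipping agent, tails Docker container logs
    image: grafana/promtail:2.9.4   # example tag
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./promtail-config.yml:/etc/promtail/config.yml   # assumed config file, not shown here
    command: -config.file=/etc/promtail/config.yml
  loki:                             # 2. log database
    image: grafana/loki:2.9.4
    ports:
      - "3100:3100"
  grafana:                          # 3. log GUI (queries, dashboards, alert rules)
    image: grafana/grafana:10.4.2
    ports:
      - "3000:3000"
```
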
1 Like

You don’t go to every "docker"; if anything, you would go to every "container". Not the same thing, but after reading the topic, I think nobody has recommended the journald driver yet. If you are on a server that has systemd and journald, you can set the container to send its logs to the server’s journal, and you can use the standard journalctl commands to search for logs. You could still use the docker logs command if you want, but that would read the journal log as well.

You wouldn’t have alerts unless you implement them or find a tool that supports journald, but for development, if you have healthchecks implemented for containers in a compose file for example, having all the logs in one place instead of having to run multiple docker logs commands can help too. And it is very easy to configure, even globally in the daemon.json (but that requires recreating the containers).
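
For illustration, a minimal sketch of the journald driver in a compose file (service and image names are placeholders):

```yaml
services:
  api:                              # hypothetical service
    image: my-api:latest
    logging:
      driver: journald
      options:
        tag: "api"                  # becomes the syslog identifier in the journal
# then on the host, for example:
#   journalctl -t api -f
#   journalctl -t api --since "1 hour ago" | grep -i error
```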

Not really true. Some dev-related configs could be tricky sometimes, like running the code step by step and checking the state of the variables in the meantime, but even that can be done. And it would be hard to make the software really compatible with containers if you don’t use containers during development.

1 Like

Okay, so after trying to search for an "out of the box" solution, I couldn’t find one, so I just decided to build it with a friend.

Introducing LogForge: GitHub - log-forge/logforge

Putting this up here in case anyone else needs something like this. Oh and if you do happen to use it at all, I’d love to hear feedback - the good, the bad and the bugs haha

1 Like

Thanks for sharing!

What’s the difference compared to dozzle.dev?

With so many different licenses around, why did you create your own?

1 Like

Hi, so the main difference is right here: RFE - Setup SMTP for sending alert from the logs based on entered keyword Ā· Issue #1086 Ā· amir20/dozzle Ā· GitHub

We built LogForge with that exact use case in mind. LogForge is not just a snapshot of the logs while the UI is open; it continuously "reads" through the logs to alert on keywords, even when the UI is not open. Hence it can continuously monitor internal Docker services.

I wanted to have this running on my dev server at work so that I (and other devs) can set alerts, go to sleep, and the next day open up the UI and instantly be informed about any issues that were caught during our off hours.

It’s also a very simple/clean process: in most cases you just clone and run the build command and you’re done. In the most complex case you set the env vars as you see fit, and all you pull to your machine is 4 files total, which I just find appealing. :man_shrugging:

We don’t support sending alerts via SMTP yet, but that is in the works and will be added.

You’re not the first to bring up the license, and I’m honestly not entirely sure why we went with a custom license; we’ll probably change that to MIT. But I am curious whether you believe it may impact adoption, and what your suggestion or overall opinion is?

Thank you for sharing!

Regarding the license: some companies have compliance rules that only allow using components/software from a subset of the known SPDX license list:

1 Like

Redis just went back to a standard open source license, maybe check 1 and 2. They originally moved away, like Elasticsearch, because third parties would take the open source software, host and run it for others as a service and earn money with it, without contributing back.

Corporate users like standard licenses because their lawyers don’t need to review them; open source contributors also like standard licenses because they know what they are getting themselves into. The challenge is which one to pick if you don’t want third parties to run the service for others, maybe because that’s your business model.

1 Like

Thanks @bluepuma77 and @meyay for the additional information regarding licenses.

The license has been updated to AGPLv3.

Hopefully that puts anyone who wants to use it at ease :slightly_smiling_face: