At codebeat we use Docker Cloud for our static analysis backend. The core analyser app is a Go binary which shells out to individual language-specific parser binaries; each parser exports its language’s AST to a common form, which is then processed further by the core binary. This has served us well, but we now support 8 languages with 5 separate parsers: the container is becoming very heavy, builds take longer and longer, and we’re about to add even more dependencies to support further languages.
An obvious solution would be to turn the individual parsers into separate services. That is certainly an option, but it comes with a price tag: we’d have to handle the HTTP transport, manage and monitor each of these services separately, suffer constant issues with the overlay network, etc.
What I’d like to do instead is run these parsers as containerized binaries from within the core container: instead of shelling out directly to a local binary, we’d do something like `docker run -it codebeat:parser-haskell`, with `codebeat:parser-haskell` being a hypothetical private Docker image.
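For the record, here is a minimal sketch of what that would look like in the Go core binary, assuming the core container can reach a Docker daemon (typically by mounting the host’s `/var/run/docker.sock` into the container, the "Docker-outside-of-Docker" pattern). The image name, the idea of streaming source over stdin, and the function names are assumptions for illustration. Note that for a non-interactive pipe you’d want `-i` without `-t`: allocating a TTY fails when stdin isn’t a terminal.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// buildParserCmd constructs the docker invocation we'd use in place of
// shelling out to a local parser binary. --rm removes the container when
// the parser exits; -i keeps stdin open so we can pipe source in and
// read the exported AST back on stdout.
func buildParserCmd(image string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm", "-i", image)
}

// runParser pipes a source file into the containerized parser and
// returns whatever the parser writes to stdout.
func runParser(image string, source []byte) ([]byte, error) {
	cmd := buildParserCmd(image)
	cmd.Stdin = bytes.NewReader(source)
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("parser %s failed: %w", image, err)
	}
	return out.Bytes(), nil
}

func main() {
	// Hypothetical image name from the question above.
	cmd := buildParserCmd("codebeat:parser-haskell")
	fmt.Println(cmd.Args)
}
```

The trade-off versus true separate services: you keep a single process to manage, but each parse pays the container start-up cost, and mounting the Docker socket effectively gives the core container root-equivalent access to the host, which is worth weighing.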
Do you think this makes sense? Is it possible? Any advice on how to go about it?