Share container services

Hello,

currently I'm trying out Docker a little, and it seems I'm quite confused about its use cases at the moment.
I thought Docker was a way to build modularised (micro-)services that can easily be put together. And yes, this works for some cases, like services that expose a port (e.g. the classic Apache/PHP combination), but not for every binary.
What I'd really like is to have a container A that depends on a service exposed by container B. For example, B houses the Node.js binary, and from A I want to run a Node script using the binary in B. When I want to try out a different Node version, I can simply swap/rebuild B with that version without changing anything in A.

At least on the host this is possible: when you define the binary as the ENTRYPOINT of B, you can `docker run B {args}` and it works flawlessly.
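Something like this is what I mean (the image name `node-runner` and the script path are just examples):

```dockerfile
# B/Dockerfile -- B only provides the node binary as its entrypoint
FROM node:20-alpine
ENTRYPOINT ["node"]
```

```console
$ docker build -t node-runner ./B
# run a script from the host; trying another Node version only means rebuilding B
$ docker run --rm -v "$PWD/scripts:/scripts" node-runner /scripts/hello.js
```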
And as far as I know, to do this from A you only have the following options:
1. Mount the Docker socket into A, which seems to be a huge security flaw, because then A can manage the complete Docker environment on the host (sketched just below).
2. Go for the Docker-in-Docker approach and create B inside of A (so A is the host of B), which is considered bad practice and has major flaws.
3. Run a server in B that acts as an interface to the binary (see the Compose sketch further down).
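Just to make option 1 concrete, this is roughly what the socket mount looks like (assuming image A has the docker CLI installed; `my-image-a` is a made-up name):

```console
# Option 1: A talks to the host's Docker daemon through the mounted socket.
# Anyone who can reach this socket effectively has root on the host.
$ docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-image-a \
    docker run --rm node-runner --version
```

The inner `docker run` is executed by the host's daemon, so B actually starts as a sibling of A, not inside it.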

In my opinion, in a perfect world you would be able to just link A and B together and then use the exposed binaries/services as if they were included directly in A.
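As far as I can tell, linking (or a shared Compose network) only gives you a network connection between A and B, not access to B's filesystem or binaries, so the closest approximation today seems to be option 3. A rough sketch, where the service names and the `node-runner-http` wrapper image are made up:

```yaml
# docker-compose.yml -- B wraps the node binary behind a small HTTP service
# that A calls over the shared network; "node-runner-http" is hypothetical
services:
  a:
    image: my-image-a
    environment:
      NODE_RUNNER_URL: http://b:8080   # A reaches B by service name
  b:
    image: node-runner-http            # exposes node behind an HTTP endpoint
    expose:
      - "8080"
```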

Is there really no other option, or any plan to make this kind of service composition possible? Or is Docker simply the wrong tool for this kind of use case?

Thanks!