From the remote host? That’s impossible. It is bad practice to access a container by its internal IP, which is why no one actually cares about the internal IPs. You would need to manipulate the routes for the network 10.5.0.2 belongs to on the other hosts of your 192.168.1.0 network to allow them to access the container directly - which would be massively bad practice.
Do you know what it is used for? It really does nothing unless you use container links. I consider it legacy, because as soon as you move your stuff to swarm, links will not work.
Won’t work with swarm either. To fix the availability problem you can tweak your nginx.conf to use the internal network’s DNS server AND introduce a variable in proxy_pass to prevent nginx from caching the target IP, see: NGINX swarm redeploy timeouts - #5 by meyay (this even works for dynamic container IPs). Nginx does not need to be started after the other containers, though the other containers need to be started to successfully forward the traffic… which eventually happens.
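A minimal sketch of that nginx.conf tweak, assuming a service named `backend` listening on port 8080 (both are placeholders for your setup; 127.0.0.11 is Docker’s embedded DNS server):

```nginx
server {
    listen 80;

    # Ask Docker's embedded DNS server and re-resolve every 10s
    # instead of resolving once at startup.
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces nginx to resolve the name at
        # request time, so containers redeployed with a new IP
        # are picked up without restarting nginx.
        set $upstream http://backend:8080;
        proxy_pass $upstream;
    }
}
```

With a literal hostname in proxy_pass, nginx resolves it once at startup and fails if the target is not up yet; the variable form defers resolution to request time.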
The whole world silently agreed that Kubernetes is the orchestrator of choice. Kubernetes gives fine-grained control and is way more powerful than docker-compose or swarm will ever be.