When to use Docker-Compose and when to use Docker-Swarm

I’m trying to understand the differences or similarities between D-Compose and D-Swarm.

From reading the documentation I understand that docker-compose provides a mechanism to bind different containers together so they work in collaboration, as a single service (I’m guessing it uses the same functionality as the --link option used to link two containers).

Also, my understanding of docker-swarm is that it lets you manage a cluster of different docker-hosts, each of which runs several container instances of some docker-images. We can define overlay-networks between different containers in the swarm (even if they span two docker-hosts) to connect them as a unit.

What I’m trying to understand is: has docker-swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?

Or is docker-compose still an integral part of the docker family, and is it expected and advisable to use it to connect containers that work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?

Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?

I also see it mentioned in the docker documentation that --link is no longer recommended and will eventually become obsolete.

I’m a bit confused???

Thanks a lot!


My understanding is that Swarm overlay networks are now the way to connect containers. It’s made quite confusing because docker “stacks” use “compose” yaml files, so it gets tricky to work out what to put into the yaml compose file to get a swarm stack…

More or less.

There is a Compose file format (version 3.0+) which can be used to create docker stacks consisting of docker services with just the (Go-based) Docker Engine. The motivation for reusing the same file format is that it’s easier to pick up for users already familiar with it.

Now, a bit confusingly, there is a (separate) Python program called docker-compose which does orchestration too. This is the “Compose” you’re likely familiar with. It accepts files in the same format and in recent versions makes API requests similar to those made by the (Go-based) docker binary itself. Think of the new stuff as features that were previously only in Compose mostly being rolled into the Docker Engine. “Stacks” and “services” are slightly newer terminology, but the end goal is largely the same: container orchestration.

One Compose file (usually) => One docker stack.
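
For illustration (assuming a docker-compose.yml in the current directory, and swarm mode enabled for the stack command), the same file can be handed to either tool:

$ docker-compose up -d                                  # Python docker-compose: containers on this one host
$ docker stack deploy -c docker-compose.yml mystack     # Docker Engine swarm mode: a stack of services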

If you don’t need --link, you can simply create a docker network and drop containers (part of a service) on it. They can then reach each other with a built-in DNS entry based on service name (e.g., if the service you want to reach is called db, then db should resolve to that service’s “virtual IP” from inside your container).
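
A minimal sketch of that with standalone containers (not swarm services) on a user-defined network; in this case the built-in DNS resolves the container name rather than a service VIP, and the names here are made up:

$ docker network create mynet
$ docker run -d --name web --network mynet nginx:alpine
$ docker run --rm --network mynet alpine wget -qO- http://web    # "web" resolves via the network's embedded DNS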


Thank You Very Much.

I’m trying to figure out whether there is a specific recommendation from the docker developers themselves as to what should be used to connect closely related containers - compose or swarm overlay networks.

The main dilemma I have is that connecting containers via a network does not seem the same as connecting them together with something like compose (or are they the same?). Is compose-style container binding more secure than an overlay-network connection if the services are closely bound and can live on the same host?

Thanks
Shabir

The Compose format can be used to create networks. Here’s an example demonstrating an overlay network.

docker-compose.yml:

version: '3.1'

networks:
  mynet:
    driver: overlay

services:
  nginx:
    image: nginx:alpine
    deploy:
      replicas: 2
    networks:
      - mynet
  curler:
    image: nathanleclaire/curl
    command: sh -c 'while true; do curl -si nginx | grep HTTP; sleep 1; done'
    networks:
      - mynet

See how we created mynet and then assigned it to both services?

$ docker stack deploy -c docker-compose.yml example
Creating network example_mynet
Creating service example_nginx
Creating service example_curler

$ docker service logs example_curler
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK
example_curler.1.n8zofsqwwhvu@awsvm    | HTTP/1.1 200 OK

You can create bridge networks too I think (default if driver: is not specified).

(Note: docker service logs requires the --experimental flag to be set on the daemon.)


Do we have to explicitly connect them via networks in a compose file as you have shown???

Or does compose create a default overlay-network and connect the containers in the compose file if we mention nothing?

Or are all containers in the compose file connected to each other via some other mechanism (other than networking) so they can call processes in the other containers and communicate?

I think you must define networks explicitly unless you use links: (which creates networks behind the scenes, and is somewhat deprecated).

No, you must use networks. This is for isolation purposes as well, since there might be some containers you don’t want to be callable from other ones.
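
To illustrate the isolation point (a hypothetical sketch, in the same Compose v3 format as the example above; all names are made up): proxy and db share no network, so they cannot reach each other, while app sits on both networks and can talk to either one.

version: '3.1'

networks:
  frontend:
  backend:

services:
  proxy:
    image: nginx:alpine
    networks:
      - frontend          # can reach app, but not db
  app:
    image: nginx:alpine
    networks:
      - frontend
      - backend           # on both networks
  db:
    image: redis:alpine
    networks:
      - backend           # not reachable from proxy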

Okay.

So if I understand right, docker compose was created to define services made of containers in a single .yaml file. In that file you can also define networks to connect these containers together.

Swarm is an extension that deploys services/containers in a distributed fashion across multiple nodes. It uses the same kind of .yaml file as compose to deploy services to the cluster. But on top of the usual deployment and network creation offered by compose, swarm also offers other things, like defining which node a specific service should run on, or how many replicas should be started, etc.
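
For example, those swarm-only settings live under the deploy: key in a version 3 file (a hypothetical sketch; service name and constraint are made up, and the classic docker-compose tool ignores this section):

version: '3.1'

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3                    # number of task replicas to run
      placement:
        constraints:
          - node.role == worker      # only schedule on worker nodes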

Thanks Very Much!!!

Yes, pretty much correct. The most recent Compose functionality is built directly on top of swarm mode, which is a lower-level component and can be used directly via Docker CLI too, e.g., docker service create ....
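
A rough sketch of using swarm mode directly from the CLI (names are made up; assumes a single-node swarm for simplicity):

$ docker swarm init                                   # turn this engine into a (single-node) swarm manager
$ docker network create -d overlay mynet              # overlay network usable by services
$ docker service create --name web --replicas 2 --network mynet nginx:alpine
$ docker service ls                                   # list services and their replica counts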

Nathan, so is Docker then basically recommending that going forward one should stop using the Python docker-compose in favor of just using the docker engine / docker stack consisting of docker services?
If I’m following, it sounds like this is the more generic way to configure/run your system between development and production, where development runs on a single system but production could have containers running across a number of systems / a docker swarm cluster?

@macedemo did you get an answer to that somewhere else?

I’m wondering, just like you and just like the author, when or why should I use D-Swarm and not D-Compose? All my nodes will probably run on a cloud server anyway, so why not run all the containers on the same server?
Why would I want to split my containers across different nodes?

Thanks :)

This is an old post but I was still looking for the answer to this exact question. If you take a look at the Docker guides https://docs.docker.com/get-started/part3/, docker-compose is not used, but docker stack is.

If I understand correctly, Compose requires a network section to be defined in the compose.yml file if you want container-to-container communication.

With docker-compose, on the other hand, this is done for you, e.g. in this demo from Confluent, the docker-compose.yml file contains only a services stanza and all containers can still talk to each other.

Is my understanding correct? Thanks.

The default driver depends on how the Docker Engine you’re using is configured, but in most instances it is bridge on a single host (when using docker-compose) and overlay on a Swarm.
Docker defaults to using a bridge network on a single host. That’s why the containers in the compose file above can talk to each other.
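
A minimal sketch of that docker-compose behaviour (image and service names are made up): with no networks: section at all, docker-compose puts both services on an automatically created default network (named after the project directory, e.g. myproj_default), so they can reach each other by service name.

docker-compose.yml:

version: '3.1'

services:
  web:
    image: nginx:alpine
  curler:
    image: nathanleclaire/curl
    command: sh -c 'while true; do curl -si web | grep HTTP; sleep 1; done'

$ docker-compose up -d
$ docker network ls | grep default    # shows the <project>_default bridge network docker-compose created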