Docker Community Forums

Share and learn in the Docker community.

Docker 1.12 Swarm Mode - don't want to use service


(Hstang9876) #1

Hi,

Can I still take advantage of Docker Swarm without using services? Instead, I want to do “docker run” and have Swarm take care of scheduling which node the container is created on.


(Nathan Le Claire) #2

That’s more how “legacy” Swarm does things; you might want to consider using that. But why not use a service? It’s quite a nice abstraction.
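For reference, the legacy Swarm workflow looks roughly like this — the manager address and port below are placeholders for your own setup, not values from this thread:

```shell
# Point the Docker client at a legacy (standalone) Swarm manager instead
# of a single engine. A plain `docker run` is then scheduled onto one of
# the cluster's nodes by the Swarm manager.
export DOCKER_HOST=tcp://swarm-manager.example.com:3375

# This container lands on whichever node the manager picks.
docker run --rm alpine echo "scheduled by legacy Swarm"
```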


(Hstang9876) #3

Our containers run short-lived processes that are created on demand, not the long-running ones services assume. Can the new Swarm mode still schedule containers the way legacy Swarm does? We’d still like all the goodies that come with 1.12, like security, health checks, etc.


(Nathan Le Claire) #4

Why not have a message queue and pull messages off the queue from another service when needed? There are a couple of different approaches to this problem that don’t mandate doing a docker run for each “action”.


(Hstang9876) #5

I need each of my “short-lived” processes to run in its own container, not as a service that listens to a queue and processes everything in the same container. It’s important that each of my tasks runs in its own container. It sounds like the new Swarm mode doesn’t support this scenario. What’s the future of legacy Swarm? Is it deprecated?


(Patrick) #6

We’re also considering using the swarm for relatively short-lived services (5-90min): to execute Jenkins (continuous integration) jobs.

@hstang9876 You mention:

> not long-running ones required by services

So far we haven’t seen any indication that services must or should be long-lived. Couldn’t you spin up a service consisting of a single task for your scenario as well, then destroy it when it’s done?


(Hstang9876) #7

When in service mode, Swarm takes over and manages the lifecycle of your service. The current behavior is that Swarm will keep spinning up new containers after they have been destroyed, just to meet the desired number of replicas. This is not what we want, obviously.


(Bill Anderson) #8

You can use “docker service scale mytask=0” to stop the mytask service container(s) without it recreating them. It will stop the containers and not spin up replacements. Change “0” back to “1” and it will start one new container.
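As a quick sketch, assuming a service named `mytask` already exists:

```shell
# Scale the "mytask" service down to zero replicas; Swarm stops its
# containers and does not start replacements.
docker service scale mytask=0

# Later, bring one task back up without redefining the service.
docker service scale mytask=1

# Check the current replica count.
docker service ls --filter name=mytask
```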


(Michael Romanchuk) #9

First, I feel like you guys are breaking a previously established contract, and also locking me into your particular implementation for containers-at-scale. For example, similar products (e.g. Mesos) and even cloud services (e.g. Azure Container Service) are supporting the Docker Remote API for managing containers-at-scale. Management of the cluster is then an API on top of / besides Docker Remote API.

Under that contract, I can develop and test on a single Docker host and then take this as-is, with no changes, to any containers-at-scale provider that honors the contract. I might use one thing for smaller/internal jobs and another if I want a public service provider. I can also easily move to a different provider if I am unhappy.

Not honoring the contract is a major negative when assessing Docker’s solution compared to the competitors.

I think it is good for you to expand what Swarm can do. But I think it is wrong to limit it and deprecate something that is so widely supported by the competing technologies.

Second, the workaround you are providing is undocumented (https://docs.docker.com/engine/reference/commandline/service_scale/). This doesn’t give me any confidence in the new contract you are proposing.


(Bill Anderson) #10

I’m neither proposing a “new contract”, nor a workaround. I’m referring to exactly what is documented:

If you scale a service, you set the desired number of replicas.

(emphasis added)

Desired number of replicas being 0 is perfectly valid, and not outside of the above. Nor is this unique to Docker and its Swarm mode. Indeed for 0 to not be a valid number would certainly require additional documentation for it having special behavior.

This was in response to the claim that you couldn’t do this. You can use the services feature as documented and still have node distribution of one-off tasks. You should absolutely be able to stop all services and not have to redefine everything when you are ready to bring them back up. Setting scale to 0 isn’t even something I bothered to look up before doing it; it just fit, made sense, and works.

If you want the “old swarm”, remember that it was actually a separate service, not a part of the Docker daemon — an API that sat on top of Docker. So you could probably still use it if “scale=0” is just too egregious in exchange for dead-simple setup. Because it was a separate API on top, I don’t agree with your assertion that a “contract” has been broken. I used “old Swarm”, and I enjoyed it. But the new Swarm is far easier IMO, and I find the tradeoff worthy.

And just so you’re clear: I don’t work for Docker. I’m just a user, same as you.


(Patrick) #11

Sorry if I was unclear, but by “destroying it when it is done” I meant removing the service after the task is complete (docker service rm / the respective API).
For my case this will likely work (if implemented similarly to today, it would keep track of the service and delete it once the task completes); whether it would be applicable in your case, I can’t say.
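A rough sketch of that flow — the service name and image below are placeholders, not anything from this thread:

```shell
# Run a one-off job as a single-replica service.
docker service create --name one-off-job --replicas 1 my-batch-image

# Monitor the task until it has completed.
docker service ps one-off-job

# When the job is done, tear the service down entirely.
docker service rm one-off-job
```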


(Nathan Le Claire) #12

Why? It’s hard to help come up with a solution without knowing why running each job in its own container is a requirement. Pulling messages off a queue and processing them is a good pattern for many use cases. Why doesn’t it work for yours? If you’d like to do several concurrently you could set the queue container to have multiple replicas.
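The queue pattern could be sketched like this — image name and queue URL are hypothetical placeholders:

```shell
# One long-running worker service pulls jobs off a queue; concurrency
# comes from replicas rather than one `docker run` per job.
docker service create \
  --name job-workers \
  --replicas 3 \
  -e QUEUE_URL=amqp://queue.example.com \
  my-worker-image
```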

BTW, I’ve filed an issue discussing batch / cron type Swarm mode use cases here: https://github.com/docker/docker/issues/23880


(Ray Johnson) #13

I have the same concern. We put some rather complicated jobs into Docker containers. We need them to run anywhere from once every 15 minutes to once a day, and some run once a month.

Currently we use Rundeck — Rundeck runs some custom code that then launches the particular job in a container on Swarm.

It is a variety of jobs: one does revenue recognition once a day, written by a third party in Java. Another exports data from a SaaS system into Kafka queues once every 15 minutes. We have many other jobs as well, written in different technologies.

Your approach of taking things off a queue is fine for a subset of use cases but is horrible for ours.

I agree with the original poster — I’d like a “service” option, if you will, that runs a job and, when it exits, does not restart it.

Now, we do monitor those jobs and collect the logs, etc. We could scale the service down and delete it at the end of each run. However, I’d expect a race condition where a new instance could start up, and maybe even begin executing, before the scale-down notice arrives. No? This is not ideal and would be a waste of resources, particularly as we scale up to hundreds of jobs.

Ray


(Dirk Franssen) #14

In case this thread is still open: one could consider the --restart-condition none flag so that the container doesn’t restart forever.
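For example — service name and command are placeholders:

```shell
# Create a one-shot service whose task is NOT restarted after it exits.
# `--restart-condition` accepts none / on-failure / any.
docker service create \
  --name one-shot \
  --restart-condition none \
  alpine sh -c 'echo "job done"'
```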