
Deploying in swarm mode - CI/CD

amazonwebservices
ci

(Kevin Carmody) #1

Hi,

I’m trying to work out how best to automate deployment with the new Swarm mode.

As far as I can see, there are a few options, but in each case I’ve mostly hit a blocker of some kind.

1. Via the dockercloud/client

This approach seems like it’d be the supported one, as it’s the recommended way of connecting. But I’ve found it will only let me connect through an interactive login.

2. Connect via swarm join

This seems the second neatest, but swarm init in dockercloud sets --advertise-addr to a local network address, so one can’t connect from outside the network.

3. SSH into AWS, log on to a swarm node and run a shell script

I think this is kinda hacky… is this how people are doing it now?

I’d love to get some feedback / hear how other people are dealing with this.

Thanks,
Kevin


(Sbrattla) #2

Is this question somehow related to any particularities of Amazon Web Services, or are you asking about how to deploy to a Docker Swarm (running swarm mode) in general?

We’re running a self-hosted, self-managed Docker Swarm (in swarm mode) and deploy using a combination of Jenkins, Python and SSH. I can’t speak for managed solutions, but for solutions you manage yourself, you somehow need to bridge the gap between (1) having built your code artifacts and (2) having those code artifacts run in production.

The managed solutions out there probably handle this for you in some way, but as far as I can tell there is no (self-hosted) “out of the box” software which handles the case of “I have all these images in my local Docker registry and now I want to deploy one of them to a Docker Swarm”. Rundeck is, to my knowledge, the closest thing you get to this, but you still need to set up deployment jobs. After all, it’s a generic piece of software.

What we do right now is build the code artifacts and a Docker image for the artifact(s) in the same Jenkins build job, and then have Jenkins push that image to a local Docker registry. Jenkins then logs on (via SSH) to each swarm node, pulls the image, and updates the swarm service with that image. You could probably skip the SSH step if you don’t mind exposing dockerd on the Docker nodes’ network interface. We do this for staging.
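A minimal sketch of that per-node step (the host names, registry address, image and service names here are all hypothetical):

# Pull the freshly built image on every swarm node
for node in swarm-node-1 swarm-node-2; do
  ssh deploy@"$node" "docker pull registry.local:5000/my-app:latest"
done

# Then update the service once, from a manager node
ssh deploy@swarm-manager "docker service update --image registry.local:5000/my-app:latest my-app"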

We deploy to production using Jenkins as well, but then it’s just a matter of having Jenkins log on to the production swarm and pull the same image we initially built for staging (we need to configure the Jenkins job on each deploy to make sure we deploy the correct image), and that’s it.

We’re still working on this setup. I’d say Rundeck would be much better for deploying to production, because you can have Rundeck fetch the available images from the registry and present them in a GUI, and then consequently log on to each Docker node and run a script which pulls that image and updates the service. It probably takes a little work, but then again most things do.
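For the “fetch available images” part, a sketch using the Docker Registry HTTP API v2 (the registry host and repository name are hypothetical):

# List the tags of a repository so a tool like Rundeck can present them in a GUI
curl -s https://registry.local:5000/v2/my-app/tags/list
# Example response: {"name":"my-app","tags":["41","42","latest"]}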

P.S.: We mostly use the REST API exposed by dockerd, as you get more functionality there. The docker client often lags behind and does not necessarily implement all the functionality you get via the REST API. One example is network-scoped additional DNS entries for services, meaning that a service can be assigned multiple hostnames within a given overlay network, and not just the default service name. This made a big difference for us, and is not supported by the docker client.
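As an illustration of that particular feature, a hedged sketch against the Docker Engine REST API (the names and addresses are hypothetical; dockerd is assumed to listen on tcp://manager:2375):

# Create a service reachable as both "my-service" and "alt-hostname"
# on the "my-overlay" network
curl -s -X POST -H "Content-Type: application/json" \
  http://manager:2375/services/create \
  -d '{
        "Name": "my-service",
        "TaskTemplate": { "ContainerSpec": { "Image": "registry.local:5000/my-app:latest" } },
        "Networks": [ { "Target": "my-overlay", "Aliases": [ "alt-hostname" ] } ]
      }'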


(Kevin Carmody) #3

Thanks for getting back to me.

So you’re connecting over SSH and running the commands from there, via the Jenkins plugin. You could probably just use a single manager node to run the service updates, rather than SSHing into each node.

Thanks for the tip of the REST API, I’ll look into that.


(Jpatters) #4

You can get around interactive login for option number 1 by specifying -u and -p when running dockercloud/client. See dockercloud/client help for more info.
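For example (credentials are assumed to be in environment variables, and the swarm name is hypothetical):

docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock dockercloud/client -u $DOCKER_USER -p $DOCKER_PASS my-org/my-swarm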


(Stephenlautier) #5

I’m also trying to set up CI/CD, and I’ve managed to connect using the client as @jpatters described; however, I have a silly question: what’s the best way to connect to the swarm? After running dockercloud/client it outputs the DOCKER_HOST to be used, but not in a “clean” way.
Should I simply parse the output, or is there a better way of doing it?


(Stephenlautier) #6

I had tried using dockercloud/client, however with no success, since I’m using CircleCI 2.0 and I get an issue when I switch to the other Docker host stating

Cannot connect to the Docker daemon at tcp://XXX:XXX. Is the docker daemon running?

As explained in http://stackoverflow.com/questions/44009173/connect-to-docker-swarm-for-continuous-deploy

May I ask what you concluded, @skinofstars? I have the exact same scenario. Thanks!


(Jpatters) #7

I am using CircleCI 2.0 as well and this is what I ended up doing:

# Connect to the swarm via the dockercloud client (non-interactive login)
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u $DOCKER_USER -p $DOCKER_PASS <swarm_name>

# Start a "proxy" container on the remote Docker host, with DOCKER_HOST
# pointing at the tunnel opened by dockercloud/client
docker run -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock -d -t --entrypoint /bin/sh --name docker_proxy docker:17.03.1-ce

# Log in to the registry from inside the proxy container
docker exec docker_proxy docker login -u $DOCKER_USER -p $DOCKER_PASS

# "docker service list" prints a header line, so a single line of output
# means the service doesn't exist yet
apiexists=$(docker exec docker_proxy docker service list --filter name=api | wc -l)

if [ "${apiexists}" == "1" ]; then
  echo "Creating service"
  docker exec docker_proxy \
    docker service create \
    --with-registry-auth \
    ...
else
  echo "Updating service"
  docker exec docker_proxy \
    docker service update \
    --with-registry-auth \
    ...
fi

Since CircleCI uses a remote Docker install, you can’t just set DOCKER_HOST locally, since that would just override the value used to reach the remote host. So I create a container on the remote instance to act as a proxy, set DOCKER_HOST inside of it, and then run subsequent commands in there.
Note the use of --with-registry-auth to ensure that the credentials are used on all of the nodes in the swarm.
Happy to explain further if needed.


(Stephenlautier) #8

Thanks, @jpatters!

I will give this a shot tomorrow. Hopefully they’ll make this a little bit simpler, but at least I will have something in place.
Just a question: how should I obtain the -e DOCKER_HOST=172.17.0.1:32768 value? Should it always be like that? If not, will it work for each CI build?

Thanks for your help.


(Jpatters) #9

That is the internal IP address that gets assigned to the first container run by the remote Docker host, so if you are creating other containers in your build process then it will need to be adjusted. And the port is the first one that dockercloud/client tries to bind to.
This is definitely a hacky solution; I’m just sharing what I managed to get to work.
I suspect that a better solution would be to parse the response from dockercloud/client and use that. I haven’t tried it yet to see if that IP works, though; it returns the external IP of the host running Docker, so I’m not sure it will resolve properly from inside the container.
Linking the containers could also work. I’m not sure off the top of my head how to get the port, although I know it can be done.
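A hedged sketch of discovering those values instead of hard-coding them (this assumes the tunnel container is named client and publishes its endpoint port, which may not match how dockercloud/client actually runs):

# Gateway IP of the default bridge network on the remote host
BRIDGE_IP=$(docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}')

# First published port of the hypothetical tunnel container,
# e.g. "2375/tcp -> 0.0.0.0:32768" yields "32768"
CLIENT_PORT=$(docker port client | head -n 1 | awk -F: '{print $NF}')

export DOCKER_HOST="tcp://${BRIDGE_IP}:${CLIENT_PORT}"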


(Stephenlautier) #10

Unfortunately, as it is, dockercloud/client doesn’t play well with CircleCI 2.0 due to the way it works, so I understand the need for the hacky solution (hopefully they’ll help out).

I managed to parse the dockercloud/client response (thanks to my friend Alan Agius for helping out); perhaps it can help you out :slight_smile:

# Connect to the swarm and capture the client's output
CONNECT_RESULT=$(docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u ${DOCKER_LOGIN} -p ${DOCKER_PASSWORD} $SWARM)

# Extract everything after "DOCKER_HOST=" from the output
HOST_REGEX="DOCKER_HOST=(.+)"
[[ $CONNECT_RESULT =~ $HOST_REGEX ]] && DOCKER_HOST=${BASH_REMATCH[1]}
echo HOST=$DOCKER_HOST
export DOCKER_HOST=${DOCKER_HOST}

It will print something like the following:

tcp://xxx:xxx


(Stephenlautier) #11

@jpatters I managed to get your solution working to connect to my swarm, thanks a lot!

I still have one issue though: do you have an idea how I can access a file, e.g. docker-compose.yml, from the proxy?
I tried

docker run -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock -v $PWD:/app -d -t --entrypoint /bin/sh --name docker_proxy docker:17.03.1-ce

# note the added -v $PWD:/app

The folder gets created, however the source is not available.

This should be my last step!


(Stephenlautier) #12

Update: it worked!

I copied the file manually rather than using a volume:

docker cp stack.yml docker_proxy:/app/stack.yml

(Jpatters) #13

Yeah. Again, that is because it’s running on a remote host; $PWD isn’t on the same machine you are running the command from.
Also, I opted to create the services individually instead of using a compose file and docker stack deploy. As far as I know, docker stack deploy doesn’t re-pull tags (i.e. latest). So instead I tag my images with the CircleCI build number and run docker service update --image <image>:<build number> <service name>.
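A minimal sketch of that flow (the image and service names are hypothetical), reusing the docker_proxy container from above:

# Tag and push an image per CI build, then point the service at the new tag
docker build -t myorg/app:$CIRCLE_BUILD_NUM .
docker push myorg/app:$CIRCLE_BUILD_NUM
docker exec docker_proxy docker service update --with-registry-auth --image myorg/app:$CIRCLE_BUILD_NUM api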

I just found I had finer control over things this way.


(Stephenlautier) #14

Yes, I thought it was because of the remote host; it’s a bit confusing with the three containers.

So far, in my previous tests, I was using docker stack deploy to deploy (from my machine) and it seemed to be working fine. We are only using it to deploy two services within the same solution.

But thanks for the heads up :thumbsup:


(Akath20) #15

This! How! Please give an example!


(Jpatters) #16

Since posting this I’ve played around some more and changed the way I am doing things a bit. I’ve switched to using docker stack deploy (as it does re-pull tags), which made things a bit more complicated because we are working with a remote Docker host. Here are the commands I am using.

# Build the image
docker build -t forestryio/app:$CIRCLE_BUILD_NUM -t forestryio/app:$CIRCLE_BRANCH -t forestryio/app:latest .

# Log in to docker
docker login -u $DOCKER_USER -p $DOCKER_PASS

# Push the tags
docker push forestryio/app:$CIRCLE_BRANCH
docker push forestryio/app:$CIRCLE_BUILD_NUM
docker push forestryio/app:latest

# Run the dockercloud client so we can deploy
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u $DOCKER_USER -p $DOCKER_PASS forestryio/production

# Build an image you can use to deploy from (see next code block for contents of Dockerfile-deploy)
docker build -t deploy -f Dockerfile-deploy .

# Run the deploy image with DOCKER_HOST set to the ip and port for the dockercloud client container
# and make sure it stays running (it just opens bash and sits there) and detach from it
docker run -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock -d -t --entrypoint /bin/sh --name deploy deploy

# Log in to docker from the deploy container we just started
docker exec deploy docker login -u $DOCKER_USER -p $DOCKER_PASS

# Deploy the app with docker stack deploy from inside the deploy container
docker exec deploy docker stack deploy --compose-file docker-stack.yaml --with-registry-auth forestry

# Cleanup
docker stop deploy
docker rm deploy
docker rmi deploy

And the contents of Dockerfile-deploy

FROM docker:17.03.1-ce
WORKDIR /usr/src
COPY docker-stack.yaml /usr/src

So why do we have to do it this way?
Since we are using a remote Docker host, if we run docker stack deploy --compose-file docker-stack.yaml, that gets run on the remote host, and the remote host doesn’t have access to docker-stack.yaml; it’s on the local host.
A remote docker host also means that we can’t mount the docker-stack.yaml file inside a container using a volume since volumes are local as well.
So what do we have? We have the build context. So we build an image that sends the local context to the remote host, copy over docker-stack.yaml, and then use that image to run a container we can deploy from.

I’m happy to explain further if needed or help out if the configuration isn’t working for you.