Docker Swarm with Ansible - a late #swarmweek entry


In this tutorial we'll learn how to set up a Docker Swarm cluster using Ansible to orchestrate the basics.

Provision some nodes

First up, let's provision some machines to set this up on. I'm using DigitalOcean in this example, but you can use whichever cloud provider you like.

I've created three nodes:

  1. Manager
  2. Replica
  3. Node

I have the following inventory file to represent these:


[manager]
<manager_ip> private_ip=<manager_ip> ansible_ssh_user=root

[replica]
<replica_ip> private_ip=<replica_ip> ansible_ssh_user=root

[node]
<node_ip> private_ip=<node_ip> ansible_ssh_user=root

(In the original post I used the actual IPs here - substitute your own. Don't worry, that swarm no longer exists ;) ).

We'll basically be following along the official tutorial: Build a Swarm cluster for production

First up, let's bootstrap our nodes. We'll need docker engine and consul on each node. I'm not going to go into detail on these roles, but you will find them in the accompanying GitHub repo for this tutorial; the rest of this tutorial assumes you have them.

Install the basics


- hosts:
  - all
  vars:
    - initial_cluster_size: 3
  pre_tasks:
    - name: Install ansible requirements
      pip:
        name: "docker-py"
        state: present
      tags:
        - swarm
  roles:
    - ubuntubase
    - docker
    - consul

  • initial_cluster_size is used to help consul bootstrap its cluster. Set this to the size of your initial cluster.

and run the playbook:

ansible-playbook swarm.yml -i swarm_cluster

Set up the Consul cluster

Ok. So now we have docker and consul running on our servers. Let's ssh in and check what's what with consul:

ssh root@
# see the members:
root@manager:~# consul members
Node     Address             Status  Type    Build  Protocol  DC
manager  alive   server  0.6.3  2         dc1

# only this node is listed, so we need to manually join the others:

root@manager:~# consul join
Successfully joined cluster by contacting 2 nodes.

# now we have a consul cluster
root@manager:~# consul members
Node     Address             Status  Type    Build  Protocol  DC
manager  alive   server  0.6.3  2         dc1
node  alive   server  0.6.3  2         dc1
replica  alive   server  0.6.3  2         dc1

You can also run: consul monitor to view the logs. Hopefully you should see that a leader has been elected:

[INFO] consul: adding server foo (Addr: (DC: dc1)
[INFO] consul: adding server bar (Addr: (DC: dc1)
[INFO] consul: Attempting bootstrap with nodes: []
[INFO] consul: cluster leadership acquired


  • Consul is clustered, so it doesn't matter which node you log into.
  • Running join will join our nodes into a cluster. This is supposed to happen automatically - but in my case it didn't.
  • You can run consul monitor to check the logs of our cluster
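Since the join didn't happen automatically for me, it can also be scripted from Ansible rather than done by hand. A rough sketch, assuming the inventory groups from earlier and that private_ip is set as a host variable (re-joining an existing member is harmless, so this is safe to re-run):

```yaml
# Sketch: point the replica and node at the manager's consul agent.
# The hostvars lookup assumes private_ip is defined in the inventory.
- hosts:
  - replica
  - node
  tasks:
    - name: Join the consul cluster via the manager
      command: "consul join {{ hostvars[groups['manager'][0]]['private_ip'] }}"
```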

Important notes about running consul:

If you look at the upstart script we are using to run consul you can see that the command we run is:

/usr/local/bin/consul agent \
    -data-dir="/tmp/consul" -ui -bind= -client= \
    -bootstrap-expect 3\
  • -client= is required because Swarm communicates with consul on the provided consul address (below) on port 8500. By default consul binds its client interfaces (including the HTTP API on port 8500) only to the loopback address, 127.0.0.1 - the effect is that swarm cannot reach the consul server, cannot elect a leader, and basically won't work. Providing -client with the node's private IP means that swarm can communicate on <internal_ip>:8500. To quote from someone smarter than me:

Finally, the client_addr line tells Consul to listen on all interfaces (not just loopback, which is the default).

Check out the consul UI

The upstart script that runs with our consul setup sets all the nodes to also support the consul UI. To view the UI locally, we'll tunnel through one of our servers:

# create the tunnel
ssh -N -f -L 8500:localhost:8500 root@

Now you can view the consul UI on your local machine at localhost:8500/ui

Now we have Consul installed and running, let's set up our swarm:

Provision the Swarm

Set up the master

First up, let's set up the swarm manager:


- hosts:
  - manager
  tasks:
    # equivalent to:
    #$ docker run -d -p 4000:4000 swarm manage -H :4000 --replication --advertise consul://
    - name: Run swarm manager
      docker:
        name: swarm
        image: swarm
        command: "manage -H :4000 --replication --advertise {{private_ip}}:4000 consul://{{private_ip}}:8500"
        state: started
        ports:
          - "4000:4000"
        expose:
          - 4000
      tags:
        - swarm


  • We're only running this on the manager node/s from our inventory (hosts: manager)

Run the playbook again:

ansible-playbook swarm.yml -i swarm_cluster

Now, if you log into the manager node and run docker ps, you should see that we have a swarm container running on our server:

root@manager:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3b753e4772b4        swarm               "/swarm manage -H :40"   34 seconds ago      Up 33 seconds       2375/tcp            swarm

Set up the replica

Now, our replica is actually just another manager node; Docker Swarm will handle which one acts as the primary. As such, we just add the replica to our hosts list above:

- hosts:
   - manager
   - replica

This deviates from the official tutorial, but I found that when following the official tutorial I often got stuck. For example, my "manager" node would lose the master election and my replica node would become master. In the official tutorial this is problematic because port 4000 is not exposed on the replica - therefore one cannot communicate with the master node.


  • Note in each case we're using the {{private_ip}} for consul. This tutorial differs slightly from the official one in that we have installed consul on all our nodes. Because consul is clustered, talking to consul on any node is the same.
  • See the note above about the -client flag. Without -client= set, our swarm command above will not be able to communicate with Consul on the provided IP. Further: it will also not be able to communicate via localhost, because in that instance localhost refers to localhost inside the docker container. This again deviates from the tutorial; however, without this change the tutorial did not work for me.

Set up the node

To set up the node, add:

- hosts:
  - node
  tasks:
    # equivalent to:
    #$ docker run -d swarm join --advertise= consul://
    - name: Run swarm node
      docker:
        name: swarm
        image: swarm
        command: "join --advertise {{private_ip}}:2375 consul://{{private_ip}}:8500"
        state: started
      tags:
        - swarm

You can now ssh into the node server and you should see that there is a docker swarm container running on there.

Note: The master tries to communicate with the node on port 2375. I needed to add DOCKER_OPTS="-H tcp://" to my config file in /etc/default/docker. I then also needed to pass -H :2375 when talking to the local docker instance.
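That edit to /etc/default/docker can itself be handled by the playbook instead of by hand. A sketch - the tcp://0.0.0.0:2375 bind address is an assumption on my part, and you may want to restrict it to the private interface:

```yaml
# Sketch: make the docker daemon listen on TCP 2375 as well as the
# unix socket, then restart it. The bind address is an assumption.
- hosts:
  - node
  tasks:
    - name: Expose the docker daemon on port 2375
      lineinfile:
        dest: /etc/default/docker
        regexp: '^DOCKER_OPTS='
        line: 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"'
      notify: restart docker
  handlers:
    - name: restart docker
      service:
        name: docker
        state: restarted
```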

Communicate with the Swarm

You now have a Docker Swarm running. Some things you can do:

Log into the swarm manager, then check the status of the cluster:

docker -H :4000 info

Run a container on the cluster

docker -H :4000 run hello-world

Check which node the container ran on:

docker -H :4000 ps -a
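If you'd rather drive the swarm from Ansible than from the shell, the same endpoint works there too. A sketch using the same (since-deprecated) docker module as above - the docker_url parameter and the hello-world image are my assumptions, not part of the original setup:

```yaml
# Sketch: schedule a container via the swarm manager rather than the
# local docker daemon, by pointing docker_url at port 4000.
- hosts:
  - manager
  tasks:
    - name: Run hello-world somewhere on the swarm
      docker:
        docker_url: "tcp://{{private_ip}}:4000"
        image: hello-world
        state: started
```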


Issues I had that appear not to be documented:

  • Swarm is unable to communicate with consul. Need to specify -client= for the consul agent.
  • The Docker master is unable to communicate with nodes on port 2375. Need to add DOCKER_OPTS="-H tcp://" to the config file in /etc/default/docker. This means that the normal docker .. command will no longer work on the node - you need to specify docker -H :2375 ....

Next Steps

  • Get the swarm working with Docker Compose
  • Test the new (beta) on-node-failure re-scheduling feature
  • Test with rolling over an entire node (Chaos Monkey style)
  • Look into Registrator to automatically register the nodes with consul ... (or should that actually already be happening?)