
Docker Swarm 1.0 with Multi-host Networking: Manual Setup

Jeff Nickoloff had a great Medium post recently about how to set up a Swarm 1.0 cluster using the multi-host networking features added in Docker 1.9. He uses Docker Machine’s built-in Swarm support to create a demonstration cluster in just a few minutes.

In this post, I’ll show how to recreate the same cluster manually — that is, without docker-machine provisioning. This is for advanced users who want to understand what Machine is doing under the hood.

First, let’s take a look at the layout of the cluster Jeff created in his post. There are four machines:

[Figure: Topology of our Swarm cluster.]

To approximate what Machine provisions for us, we'll use this Vagrantfile to bring up four Ubuntu boxes:

Name   IP   Description
kv2   192.168.33.10   Consul (for both cluster discovery and networking)
c0-master   192.168.33.11   Swarm master
c0-n1   192.168.33.12   Swarm node 1
c0-n2   192.168.33.13   Swarm node 2

In the directory where you saved the Vagrantfile, run vagrant up. This will take 5-10 minutes, but at the end of the process you should have four VMs, each running Docker 1.9 or later. Note how our Vagrantfile starts each instance of Docker Engine (the docker daemon) with --cluster-store=consul://192.168.33.10:8500 and --cluster-advertise=eth1:2375. Those are the same flags Jeff passes to docker-machine using --engine-opt.
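For reference, the net effect of that provisioning on each box is roughly the Engine configuration sketched below. This is my own sketch, assuming an Ubuntu install that reads DOCKER_OPTS from /etc/default/docker; the exact mechanism depends on how the Vagrantfile's provisioning script does it, but the flags are what matter:

# /etc/default/docker (sketch, not taken verbatim from the Vagrantfile):
# listen on TCP so we can drive each Engine remotely, and point it at the
# shared Consul store for cluster discovery and overlay networking
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \
  --cluster-store=consul://192.168.33.10:8500 \
  --cluster-advertise=eth1:2375"

The -H tcp://0.0.0.0:2375 part is what lets us address each Engine remotely with docker -H=tcp://&lt;ip&gt;:2375 throughout the rest of this post.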

Because Docker's multi-host (overlay) networking requires a 3.16 or newer kernel, we need to do one manual step on each machine to upgrade its kernel. Run these commands from your host shell prompt:

$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" kv2
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-master
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-n1
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-n2
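Once the boxes come back up, a quick sanity check (my addition, not in Jeff's post) confirms the new kernel took; repeat for the other three machines:

$ vagrant ssh -c "uname -r" kv2

You should see a 3.16-or-newer kernel version reported.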

(Jeff doesn’t have to do this in his tutorial because Machine provisions from an ISO that already contains a recent kernel.)

We’re now ready to set up a Consul key/value store just as Jeff did:

$ docker -H=tcp://192.168.33.10:2375 run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
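If you want to confirm Consul is up before continuing (an extra check of mine, not part of Jeff's walkthrough), its HTTP API answers on the published port:

$ curl http://192.168.33.10:8500/v1/catalog/nodes

A small JSON array listing the consul node means the key/value store is ready.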

Here’s how you manually start the Swarm manager on the c0-master machine. Note that it publishes the manager on port 3375, so it doesn’t collide with the Engine already listening on 2375:

$ docker -H=tcp://192.168.33.11:2375 run -d -p 3375:2375 swarm manage consul://192.168.33.10:8500/
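The manager is just an ordinary container on c0-master, so you can confirm it started by pointing a plain Docker client at that Engine (port 2375, not the Swarm port):

$ docker -H=tcp://192.168.33.11:2375 ps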

Next we start two swarm agent containers on nodes c0-n1 and c0-n2:

$ docker -H=tcp://192.168.33.12:2375 run -d swarm join --advertise=192.168.33.12:2375 consul://192.168.33.10:8500/
$ docker -H=tcp://192.168.33.13:2375 run -d swarm join --advertise=192.168.33.13:2375 consul://192.168.33.10:8500/
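Likewise, each join agent is just a container on its node; a ps against each Engine confirms they are running:

$ docker -H=tcp://192.168.33.12:2375 ps
$ docker -H=tcp://192.168.33.13:2375 ps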

Let’s test the cluster:

$ docker -H=tcp://192.168.33.11:3375 info
$ docker -H=tcp://192.168.33.11:3375 run swarm list consul://192.168.33.10:8500/
$ docker -H=tcp://192.168.33.11:3375 run hello-world
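If everything is wired up correctly, info should report Nodes: 2 with c0-n1 and c0-n2 listed, swarm list should print the two node addresses registered in Consul, and the hello-world container will be scheduled onto one of the two nodes.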

Create the overlay network just as Jeff did:

$ docker -H=tcp://192.168.33.11:3375 network create -d overlay myStack1
$ docker -H=tcp://192.168.33.11:3375 network ls
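To see the new network's details, and later which containers have joined it, network inspect works through the same Swarm endpoint:

$ docker -H=tcp://192.168.33.11:3375 network inspect myStack1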

Create the same two (nginx and alpine) containers that Jeff did:

$ docker -H=tcp://192.168.33.11:3375 run -d --name web --net myStack1 nginx
$ docker -H=tcp://192.168.33.11:3375 run -itd --name shell1 --net myStack1 alpine /bin/sh
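Because we launched them through the Swarm endpoint, the two containers may well land on different nodes. Listing them shows where each was scheduled; Swarm prefixes every container name with the node it runs on:

$ docker -H=tcp://192.168.33.11:3375 ps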

And verify they can talk to each other just as Jeff did:

$ docker -H=tcp://192.168.33.11:3375 attach shell1
$ ping web
$ apk update && apk add curl
$ curl http://web/
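When you're done, detach from shell1 with Ctrl-p Ctrl-q rather than typing exit; /bin/sh is the container's main process, so exiting it would stop the container.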

You should find that shell1 can ping and curl the nginx container by name, just as in Jeff’s tutorial, even though the two containers may be running on different hosts.
