
Use Vagrant + Ansible to get your Swarm Cluster test environment running

 Hi,

This is just a quick post as a follow-up to the previous post:

Configure Docker Swarm Cluster with Consul Registry service

In that post we did the environment configuration by hand; here we are going to use Vagrant + Ansible to make life easier. This is not an optimal Vagrant/Ansible configuration, it's just to get the test lab ready in minutes, so you can spend your time with Swarm:

So we are going to use Vagrant to create our 6 VMs, and then use Ansible to provision the servers.

First we create a dir to store our Vagrantfile:

[liquid@liquid-ibm:vagrant]$ mkdir /home/liquid/vagrant/swarm                                                                                                                    (10-03 10:01) 

We do a vagrant init to create the config and the Vagrantfile:

[liquid@liquid-ibm:vagrant/swarm]$vagrant init

[liquid@liquid-ibm:vagrant/swarm]$ ls -la                                                                                                                                                (10-03 10:02)
total 4
drwxrwxr-x 4 liquid liquid  136 Oct  2 23:10 .
drwxrwxr-x 3 liquid liquid   19 Oct  3 00:29 ..
drwxrwxr-x 4 liquid liquid   42 Oct  2 21:38 .vagrant
-rw-rw-r-- 1 liquid liquid 2894 Oct  2 23:10 Vagrantfile

Now I'm going to comment the Vagrantfile I used. By no means is this the best config; I preferred to configure each server in a static fashion, just for clarity:


[liquid@liquid-ibm:vagrant/swarm]$ cat Vagrantfile                                                                                                                                       (10-03 10:04)
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
#
#

Vagrant.configure(2) do |config|    ----> here we start our vagrant config
  config.vm.box = "centos/7"        -----> we are going to use the centos/7 image/box

#here we start to configure each of our 6 vms:

  config.vm.define :docker_reg1 do |docker_reg1|
  	docker_reg1.vm.provider :libvirt do |domain|   ---------> here we select the libvirt provider, so we can specify the vms config
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'   --------> this is a second 20 GB disk for each VM
  	end
  docker_reg1.vm.hostname ="manager1"
  docker_reg1.vm.network "private_network", ip: "192.168.123.10"        ---------------------> first network is going to be the swarm public network
  docker_reg1.vm.network "private_network", ip: "10.0.1.5", netmask: "255.255.0.0"   --------> second network is going to be the swarm management/private network
  end
# The rest of the config is the same for the other 5 machines

  config.vm.define :docker_reg2 do |docker_reg2|
  	docker_reg2.vm.provider :libvirt do |domain|
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'
  	end
  docker_reg2.vm.hostname ="manager2"
  docker_reg2.vm.network "private_network", ip: "192.168.123.11"
  docker_reg2.vm.network "private_network", ip: "10.0.2.5", netmask: "255.255.0.0"
  end

  config.vm.define :docker_reg3 do |docker_reg3|
  	docker_reg3.vm.provider :libvirt do |domain|
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'
  	end
  docker_reg3.vm.hostname ="manager3"
  docker_reg3.vm.network "private_network", ip: "192.168.123.12"
  docker_reg3.vm.network "private_network", ip: "10.0.3.5", netmask: "255.255.0.0"
  end

  config.vm.define :docker_clu1 do |docker_clu1|
  	docker_clu1.vm.provider :libvirt do |domain|
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'
  	end
  docker_clu1.vm.hostname ="host1"
  docker_clu1.vm.network "private_network", ip: "192.168.123.13"
  docker_clu1.vm.network "private_network", ip: "10.0.4.5", netmask: "255.255.0.0"
  end

  config.vm.define :docker_clu2 do |docker_clu2|
  	docker_clu2.vm.provider :libvirt do |domain|
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'
  	end
  docker_clu2.vm.hostname ="host2"
  docker_clu2.vm.network "private_network", ip: "192.168.123.14"
  docker_clu2.vm.network "private_network", ip: "10.0.5.5", netmask: "255.255.0.0"
  end

  config.vm.define :docker_clu3 do |docker_clu3|
  	docker_clu3.vm.provider :libvirt do |domain|
   		 domain.memory = 1024
   		 domain.cpus = 1
		 domain.storage :file, :size => '20G'
  	end
  docker_clu3.vm.hostname ="host3"
  docker_clu3.vm.network "private_network", ip: "192.168.123.15"
  docker_clu3.vm.network "private_network", ip: "10.0.6.5", netmask: "255.255.0.0"
  end
  # config.vm.synced_folder "../data", "/vagrant_data"

# And finally we select Ansible for provisioning and specify the YAML file with the Ansible playbook:

config.vm.provision :ansible do |ansible|
  ansible.playbook = "docker.yml"
end
end
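
At this point a simple vagrant status is already a cheap sanity check: it has to parse the Vagrantfile, so it will complain about syntax errors, and it should list the six machines as "not created" (newer Vagrant versions also ship a dedicated vagrant validate subcommand for this):

[liquid@liquid-ibm:vagrant/swarm]$ vagrant status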

That's our Vagrantfile ready. Now let's go to the Ansible playbook file docker.yml. Again, this is just an example to get the work done; in a proper installation the SSH keys should be handled differently, because here all 6 servers share the same RSA private key for the root user.
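
If you would rather not reuse a personal key at all, a minimal alternative (my own suggestion, not what the playbook below does) is to generate a throwaway keypair just for this lab and point the two copy tasks at it:

# hypothetical lab-only keypair; adjust the src= paths of the copy tasks accordingly
[liquid@liquid-ibm:vagrant/swarm]$ ssh-keygen -t rsa -b 4096 -N '' -f /home/liquid/vagrant/swarm/lab_id_rsa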

[liquid@liquid-ibm:vagrant/swarm]$cat docker.yml                                                                                                                                        (10-03 10:10)
---
- hosts: all
  become: yes            ---------------> we are going to use sudo
  become_user: root      ---------------> use sudo to become the root user
  vars:
    sshd: sshd                              --------------> service name, used by the restart sshd handler
    sshd_config: /etc/ssh/sshd_config       --------------> sshd config path, used by the PermitRootLogin task below
  tasks:
  - name: root user with ssh key     --------------> with the user module we create an ssh pub/priv keypair for root, so the .ssh dir gets created
    user: name=root  generate_ssh_key=yes
  - name: Copy priv rsa key            -------------> here is where we copy the private key from our ansible host (not recommended by any means)
    copy: src=/home/liquid/.ssh/id_rsa dest=/root/.ssh/id_rsa owner=root group=root mode=0600
  - name: Copy pub rsa key
    copy: src=/home/liquid/.ssh/id_rsa.pub dest=/root/.ssh/id_rsa.pub owner=root group=root mode=0644
  - name: set selinux to permissive   --------------> we set selinux to permissive with the selinux module, to make our life easier
    selinux: policy=targeted state=permissive



  - name: upgrade all packages          -----------> update all the packages in the system
    yum: name=* state=latest
  - name: Install Docker
    yum: name=docker state=latest
  - name: Install telnet
    yum: name=telnet state=latest
  - name: Install Ip tools
    yum: name=net-tools state=latest
  - name: Copy Docker Config          --------------> here we copy a modified docker-storage-setup file, so that docker uses LVM on disk vdb for its storage/volumes
    copy: src=/home/liquid/vagrant/swarm/docker-storage-setup dest=/usr/lib/docker-storage-setup/docker-storage-setup owner=root group=root mode=0754
  - name: Copy /etc/hosts             -------------> this is a hosts file with the IPs of all the VMs
    copy: src=/home/liquid/vagrant/swarm/hosts dest=/etc/hosts owner=root group=root mode=0644
  - name: Enable remote root login     ----------> here we permit root to log in (again, not recommended in the real world..)
    lineinfile: dest="{{ sshd_config }}" regexp="^#?PermitRootLogin" line="PermitRootLogin yes"
    notify: restart sshd              -----------> here we use the restart sshd handler, just as an example of handlers
  - name: Add Host key to Provisioned vms  --------------------> here we add our ansible host's pub key to the authorized_keys file of the root user
    authorized_key: user=root key="{{ lookup('file', '/home/liquid/.ssh/id_rsa.pub') }}"
  - name: Setup Storage       -----------------> here we run the docker-storage-setup command, which reads the docker-storage-setup file we copied and configures docker storage
    command: docker-storage-setup
  - name: Start the docker daemon      ----> finally we start the docker daemon, here we don't use a handler just so you can see it's not a must
    service: name=docker state=started enabled=yes
  handlers:
  - include: handlers/main.yml        ---------> finally we include our handlers file.
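
Before the first vagrant up it can be worth doing an optional dry syntax check of the playbook (ansible-playbook supports --syntax-check; it only parses the play, it doesn't touch any host):

[liquid@liquid-ibm:vagrant/swarm]$ ansible-playbook docker.yml --syntax-check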


So let's look at the contents of the referenced files, first the handler:

[liquid@liquid-ibm:vagrant/swarm]$ cat handlers/main.yml                                                                                                                                 (10-03 10:02)
---
 - name: restart sshd    ---------> we use the service module, with the "{{ sshd }}" variable that was defined in the vars section of the playbook
   service: name="{{ sshd }}" state=restarted

The hosts file:

[liquid@liquid-ibm:vagrant/swarm]$ cat hosts                                                                                                                                             (10-03 10:21)
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

127.0.0.1		localhost.localdomain localhost
::1		localhost6.localdomain6 localhost6

###########Docker Registry hosts
192.168.123.10   manager1 manager1.liquid.zz
192.168.123.11   manager2 manager2.liquid.zz
192.168.123.12   manager3 manager3.liquid.zz
192.168.123.13   host1 host1.liquid.zz
192.168.123.14   host2 host2.liquid.zz
192.168.123.15   host3 host3.liquid.zz
10.0.1.5   manager1-priv manager1-priv.liquid.zz
10.0.2.5   manager2-priv manager2-priv.liquid.zz
10.0.3.5   manager3-priv manager3-priv.liquid.zz
10.0.4.5   host1-priv host1-priv.liquid.zz
10.0.5.5   host2-priv host2-priv.liquid.zz
10.0.6.5   host3-priv host3-priv.liquid.zz

[liquid@liquid-ibm:vagrant/swarm]$ cat docker-storage-setup | grep -v "^#" | sed '/^$/d'                                                                                                 (10-03 10:23)
STORAGE_DRIVER=devicemapper
DEVS=/dev/vdb
VG=dockervg
DATA_SIZE=20%FREE
MIN_DATA_SIZE=2G
CHUNK_SIZE=512K
GROWPART=false
AUTO_EXTEND_POOL=yes
POOL_AUTOEXTEND_THRESHOLD=80
POOL_AUTOEXTEND_PERCENT=20
DEVICE_WAIT_TIMEOUT=60
WIPE_SIGNATURES=false

Ok, with all this ready, we can run the vagrant up command and go for a coffee:

[liquid@liquid-ibm:vagrant/swarm]$ vagrant up --provider libvirt                                                                                                                         (10-03 10:24)
Bringing machine 'docker_reg1' up with 'libvirt' provider...
Bringing machine 'docker_reg2' up with 'libvirt' provider...
Bringing machine 'docker_reg3' up with 'libvirt' provider...
Bringing machine 'docker_clu1' up with 'libvirt' provider...
Bringing machine 'docker_clu2' up with 'libvirt' provider...
Bringing machine 'docker_clu3' up with 'libvirt' provider...
==> docker_reg1: Creating image (snapshot of base box volume).
==> docker_reg2: Creating image (snapshot of base box volume).
==> docker_reg3: Creating image (snapshot of base box volume).
==> docker_reg1: Creating domain with the following settings...
==> docker_reg1:  -- Name:              swarm_docker_reg1
==> docker_reg1:  -- Domain type:       kvm
==> docker_reg1:  -- Cpus:              1
==> docker_reg1:  -- Memory:            1024M
==> docker_reg2: Creating domain with the following settings...
==> docker_reg1:  -- Management MAC:    
==> docker_reg2:  -- Name:              swarm_docker_reg2
==> docker_clu2: Creating image (snapshot of base box volume).
==> docker_reg1:  -- Loader:            
==> docker_reg2:  -- Domain type:       kvm
==> docker_reg1:  -- Base box:          centos/7
==> docker_reg2:  -- Cpus:              1
==> docker_reg1:  -- Storage pool:      default
==> docker_reg2:  -- Memory:            1024M
==> docker_reg1:  -- Image:             /var/lib/libvirt/images/swarm_docker_reg1.img (41G)
==> docker_reg2:  -- Management MAC:    
==> docker_reg1:  -- Volume Cache:      default
==> docker_reg2:  -- Loader:            
==> docker_reg1:  -- Kernel:            
==> docker_reg2:  -- Base box:          centos/7
==> docker_reg1:  -- Initrd:            
==> docker_reg2:  -- Storage pool:      default
==> docker_reg1:  -- Graphics Type:     vnc
==> docker_reg2:  -- Image:             /var/lib/libvirt/images/swarm_docker_reg2.img (41G)
==> docker_reg1:  -- Graphics Port:     5900
==> docker_reg2:  -- Volume Cache:      default
==> docker_reg2:  -- Kernel:            
==> docker_reg1:  -- Graphics IP:       127.0.0.1
==> docker_reg1:  -- Graphics Password: Not defined
==> docker_reg2:  -- Initrd:            
==> docker_reg1:  -- Video Type:        cirrus
==> docker_reg3: Creating domain with the following settings...
==> docker_reg2:  -- Graphics Type:     vnc
==> docker_reg1:  -- Video VRAM:        9216
==> docker_reg3:  -- Name:              swarm_docker_reg3
==> docker_reg2:  -- Graphics Port:     5900
==> docker_reg1:  -- Keymap:            en-us
==> docker_reg3:  -- Domain type:       kvm
==> docker_reg2:  -- Graphics IP:       127.0.0.1
==> docker_reg1:  -- Disks:         vdb(qcow2,20G)
==> docker_reg3:  -- Cpus:              1
==> docker_reg2:  -- Graphics Password: Not defined
==> docker_reg1:  -- Disk(vdb):     /var/lib/libvirt/images/swarm_docker_reg1-vdb.qcow2
==> docker_reg3:  -- Memory:            1024M
==> docker_clu2: Creating domain with the following settings...
==> docker_reg2:  -- Video Type:        cirrus
==> docker_reg1:  -- INPUT:             type=mouse, bus=ps2


....................................

Once Vagrant finishes its part, Ansible kicks in:

==> docker_reg1: Running provisioner: ansible...
    docker_reg1: Running ansible-playbook...
    docker_clu3: 
    docker_clu3: Vagrant insecure key detected. Vagrant will automatically replace
    docker_clu3: this with a newly generated keypair for better security.
==> docker_reg2: Rsyncing folder: /home/liquid/vagrant/swarm/ => /vagrant
    docker_clu1: 
    docker_clu1: Vagrant insecure key detected. Vagrant will automatically replace
    docker_clu1: this with a newly generated keypair for better security.

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
==> docker_reg2: Running provisioner: ansible...
    docker_reg2: Running ansible-playbook...
ok: [docker_reg1]

TASK [root user with ssh key] **************************************************

PLAY [all] *********************************************************************
    docker_clu3: 
    docker_clu3: Inserting generated public key within guest...

TASK [setup] *******************************************************************

................................


PLAY RECAP *********************************************************************
docker_clu1                : ok=15   changed=14   unreachable=0    failed=0   

changed: [docker_clu3]

TASK [Start the docker daemon] *************************************************
changed: [docker_clu3]

RUNNING HANDLER [restart sshd] *************************************************
changed: [docker_clu3]

PLAY RECAP *********************************************************************
docker_clu3                : ok=15   changed=14   unreachable=0    failed=0   


Ok, it has finished with no errors, let's check:

[liquid@liquid-ibm:vagrant/swarm]$ vagrant status                                                                                                                                        (10-03 10:30)
Current machine states:

docker_reg1               running (libvirt)
docker_reg2               running (libvirt)
docker_reg3               running (libvirt)
docker_clu1               running (libvirt)
docker_clu2               running (libvirt)
docker_clu3               running (libvirt)

Let's check if docker is OK:

[liquid@liquid-ibm:vagrant/swarm]$ vagrant ssh docker_reg1 -c "sudo  docker run hello-world"                                                                                             (10-03 10:34)
Unable to find image 'hello-world:latest' locally
Trying to pull repository docker.io/library/hello-world ... 
latest: Pulling from docker.io/library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for docker.io/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
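
We can also double-check that docker-storage-setup picked up the second disk. The VG name comes from the docker-storage-setup file we pushed, and /etc/sysconfig/docker-storage is where the tool normally writes the resulting options on CentOS 7 (treat the exact paths as an assumption):

[liquid@liquid-ibm:vagrant/swarm]$ vagrant ssh docker_clu1 -c "sudo lvs dockervg; sudo cat /etc/sysconfig/docker-storage"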



Ok, so now we can continue with Swarm:



We are going to use Docker containers with an image of the Consul server provided by progrium, running one container on each host:

Before we start, on the host servers (the ones that are going to run the containers inside the swarm cluster) we need to make the docker daemon listen on a TCP port, so we can connect to the daemon from the swarm managers; by default the docker daemon only listens on a local unix socket. In a production environment you should secure all the communications with TLS, between the registry, swarm managers, container hosts and CLI clients; otherwise everything travels unencrypted and is open for anyone to poke at.

So on nodes host1,2,3, to get the daemon listening on port 2375 (the standard non-TLS port), we modify the file /etc/sysconfig/docker and add "-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375":

[root@host3 ~]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375'
DOCKER_CERT_PATH=/etc/docker

Restart the docker daemon and we are ready to rock.
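
A quick way to confirm the daemon really answers over TCP from one of the manager VMs (the hostnames come from the /etc/hosts file we distributed; any of host1/2/3 should reply):

[root@docker1 ~]# docker -H tcp://host1:2375 version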


Now on to the management servers, where we are going to install the consul registry and the swarm managers:

On the first management node we set -bootstrap-expect, so it expects 3 nodes in the cluster:

[root@docker1 ~]#  docker run --restart=unless-stopped -d -h consul1 --name consul1 -v /mnt/:/data -p 192.168.124.10:8300:8300 -p 192.168.124.10:8301:8301 -p 192.168.124.10:8301:8301/udp -p 192.168.124.10:8302:8302 -p 192.168.124.10:8400:8400 -p 192.168.124.10:8500:8500 -p 192.168.123.10:53:53/udp progrium/consul -server -advertise 192.168.124.10 -bootstrap-expect 3
495e7bad5c8354495e059a52834a8775d4c9d3a1b7ec043dae944daf5af65307

And on the other 2 we use the -join option:

[root@docker2 ~]# docker run --restart=unless-stopped -d -h consul2 --name consul2 -v /mnt/:/data -p 192.168.124.11:8300:8300 -p 192.168.124.11:8301:8301 -p 192.168.124.11:8301:8301/udp -p 192.168.124.11:8302:8302 -p 192.168.124.11:8400:8400 -p 192.168.124.11:8500:8500 -p 192.168.123.11:53:53/udp progrium/consul -server -advertise 192.168.124.11 -join 192.168.124.10
495e7bad5c8354495e059a52834a8775d4c9d3a1b7ec043dae944daf5af65301
[root@docker3 ~]# docker run --restart=unless-stopped -d -h consul3 --name consul3 -v /mnt/:/data -p 192.168.124.12:8300:8300 -p 192.168.124.12:8301:8301 -p 192.168.124.12:8301:8301/udp -p 192.168.124.12:8302:8302 -p 192.168.124.12:8400:8400 -p 192.168.124.12:8500:8500 -p 192.168.123.12:53:53/udp progrium/consul -server -advertise 192.168.124.12 -join 192.168.124.10
f1ac1ee6724f453456af7d06f0b2185b240239cd6b316478f7c8e053ab2a091a

Let's check that consul is running and the 3 nodes are registered in the cluster:

[root@docker1 ~]# docker exec -it consul1 /bin/bash
bash-4.3# consul members
Node     Address              Status  Type    Build  Protocol  DC
consul3  192.168.124.12:8301  alive   server  0.5.2  2         dc1
consul1  192.168.124.10:8301  alive   server  0.5.2  2         dc1
consul2  192.168.124.11:8301  alive   server  0.5.2  2         dc1

Now let's get our swarm manager cluster running; we go back to the docker1 host and run:

[root@docker1 ~]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise 192.168.124.10:3375 consul://192.168.124.10:8500/
a37c024ecbcc1c525d9c2f6b7cbe72169fa5f9840453343f0a570743cb67134a
[root@docker1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                                               NAMES
a37c024ecbcc        progrium/consul     "/bin/start -server -"   2 minutes ago       Up 2 minutes        192.168.124.10:8300-8302->8300-8302/tcp, 192.168.124.10:8400->8400/tcp, 53/tcp, 192.168.123.10:53->53/udp, 192.168.124.10:8500->8500/tcp, 192.168.124.10:8301->8301/udp, 8302/udp   consul1
c24ca28e922c        swarm               "/swarm manage --repl"   3 minutes ago       Up 3 minutes        0.0.0.0:3375->2375/tcp                                                                                                                                                              mgr1
Check the logs to see if leadership is acquired:

time="2016-09-29T08:11:39Z" level=error msg="Leader Election: watch leader channel closed, the store may be unavailable..." 
time="2016-09-29T08:11:49Z" level=info msg="Leader Election: Cluster leadership lost" 
time="2016-09-29T08:11:49Z" level=info msg="Leader Election: Cluster leadership acquired" 
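
Those lines come from the manager container's logs; something like this follows the election in real time (assuming the container name mgr1 from the run command above):

[root@docker1 ~]# docker logs -f mgr1 2>&1 | grep -i leader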

We now do the same on the other nodes:

[root@docker2 ~]# docker run --restart=unless-stopped -h mgr2 --name mgr2 -d -p 3375:2375 swarm manage --replication --advertise 192.168.124.11:3375 consul://192.168.124.11:8500/
3a1a8dcbef32eb6819fd3c018c32576c346244761b396c37fb2c3064520fd404
[root@docker3 ~]# docker run --restart=unless-stopped -h mgr3 --name mgr3 -d -p 3375:2375 swarm manage --replication --advertise 192.168.124.12:3375 consul://192.168.124.12:8500
3a1a8dcbef32eb6819fd3c018c32576c346244761b396c37fb2c3064520fd405


Ok, so we have our swarm manager cluster ready; now we are going to add nodes to the swarm cluster, and these nodes will run the docker containers.

First we are going to add a consul agent to all nodes:

[root@docker4 ~]# docker run --restart=unless-stopped -d -h consul-agt1 --name consul-agt1  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:8600/udp progrium/consul -rejoin -advertise 192.168.124.13 -join 192.168.124.10
5a9ab6abc12166407aac0fb22f3df00b491cffb22bbe253016b4ca6a5479c146
[root@docker4 ~]# docker logs consul-agt1
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
         Node name: 'consul-agt1'
        Datacenter: 'dc1'
            Server: false (bootstrap: false)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
      Cluster Addr: 192.168.124.13 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: 
==> Log data will now stream in as it occurs:
    2016/09/30 07:49:57 [INFO] serf: EventMemberJoin: consul-agt1 192.168.124.13
    2016/09/30 07:49:57 [INFO] agent: (LAN) joining: [192.168.124.10]
    2016/09/30 07:49:57 [INFO] serf: EventMemberJoin: consul1 192.168.124.10
    2016/09/30 07:49:57 [INFO] serf: EventMemberJoin: consul3 192.168.124.12
    2016/09/30 07:49:57 [INFO] serf: EventMemberJoin: consul2 192.168.124.11
    2016/09/30 07:49:57 [INFO] agent: (LAN) joined: 1 Err: 
    2016/09/30 07:49:57 [ERR] agent: failed to sync remote state: No known Consul servers
    2016/09/30 07:49:57 [INFO] consul: adding server consul1 (Addr: 192.168.124.10:8300) (DC: dc1)
    2016/09/30 07:49:57 [INFO] consul: adding server consul3 (Addr: 192.168.124.12:8300) (DC: dc1)
    2016/09/30 07:49:57 [INFO] consul: adding server consul2 (Addr: 192.168.124.11:8300) (DC: dc1)

We just have to change the container name/hostname and the advertise IP on the other 2 nodes:

[root@docker5 ~]# docker run --restart=unless-stopped -d -h consul-agt2 --name consul-agt2  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:8600/udp progrium/consul -rejoin -advertise 192.168.124.14 -join 192.168.124.10
[root@docker6 ~]# docker run --restart=unless-stopped -d -h consul-agt3 --name consul-agt3  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8302:8302 -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:8600/udp progrium/consul -rejoin -advertise 192.168.124.15 -join 192.168.124.10

The advantage of installing an agent on each host is that the agent is aware of all the available consul servers, and if one goes down it switches to another one.
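
To see this from a node's point of view, we can query the local agent directly over its HTTP API (these status endpoints exist in Consul 0.5.x; the IP is the node's own advertise address):

[root@docker4 ~]# curl http://192.168.124.13:8500/v1/status/leader
[root@docker4 ~]# curl http://192.168.124.13:8500/v1/status/peers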

Now let's join the nodes to the Swarm cluster. Here we connect to the local consul agent we just spun up; this way we have HA on the consul service, because the local agent knows the cluster topology and can switch between the different consul nodes.

[root@docker4 ~]# docker run -d -h join --name join swarm join --advertise 192.168.124.13:2375 consul://192.168.124.13:8500/
[root@docker5 ~]# docker run -d -h join --name join swarm join --advertise 192.168.124.14:2375 consul://192.168.124.14:8500/
[root@docker6 ~]# docker run -d -h join --name join swarm join --advertise 192.168.124.15:2375 consul://192.168.124.15:8500/

Ok, so if all went well we have our initial work done, let's check that all looks ok:

[root@docker1 ~]# docker exec -it consul1 /bin/bash
bash-4.3# consul members
Node         Address              Status  Type    Build  Protocol  DC
consul-agt3  192.168.124.15:8301  alive   client  0.5.2  2         dc1
consul1      192.168.124.10:8301  alive   server  0.5.2  2         dc1
consul2      192.168.124.11:8301  alive   server  0.5.2  2         dc1
consul3      192.168.124.12:8301  alive   server  0.5.2  2         dc1
consu-agt1   192.168.124.13:8301  failed  client  0.5.2  2         dc1 --------> this is a typo I made, let's get rid of this client.
consul-agt1  192.168.124.13:8301  alive   client  0.5.2  2         dc1
consul-agt2  192.168.124.14:8301  alive   client  0.5.2  2         dc1

bash-4.3# consul leave consu-agt1
Graceful leave complete

bash-4.3# consul members                                                                                                                                                                              
Node         Address              Status  Type    Build  Protocol  DC
consul-agt3  192.168.124.15:8301  alive   client  0.5.2  2         dc1
consul-agt2  192.168.124.14:8301  alive   client  0.5.2  2         dc1
consul-agt1  192.168.124.13:8301  alive   client  0.5.2  2         dc1
consul2      192.168.124.11:8301  alive   server  0.5.2  2         dc1
consul3      192.168.124.12:8301  alive   server  0.5.2  2         dc1
consul1      192.168.124.10:8301  alive   server  0.5.2  2         dc1

Ok, that is better, we have our 3 servers, and 3 agents on the nodes.

We can also check info on the consul servers via the REST API, check it out:

[root@docker1 ~]# curl http://192.168.124.10:8500/v1/catalog/nodes | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   338  100   338    0     0  78060      0 --:--:-- --:--:-- --:--:-- 84500
[
    {
        "Address": "192.168.124.13",
        "Node": "consu-agt1"
    },
    {
        "Address": "192.168.124.13",
        "Node": "consul-agt1"
    },
    {
        "Address": "192.168.124.14",
        "Node": "consul-agt2"
    },
    {
        "Address": "192.168.124.15",
        "Node": "consul-agt3"
    },
    {
        "Address": "192.168.124.10",
        "Node": "consul1"
    },
    {
        "Address": "192.168.124.11",
        "Node": "consul2"
    },
    {
        "Address": "192.168.124.12",
        "Node": "consul3"
    }
]

Ok, so if we check the registered services we can see there is only consul:

[root@docker1 ~]# curl http://192.168.124.10:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13  100    13    0     0   3664      0 --:--:-- --:--:-- --:--:--  4333
{
    "consul": []
}

We need to get the registration service running, so that each time a new container spins up it gets registered in the cluster. For this we need yet another container (registrator), and we have to have it running on every node in the cluster:

[root@docker1 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.10:8500
Unable to find image 'gliderlabs/registrator:latest' locally

If we now check again, the swarm service has been registered:

[root@docker1 ~]# curl http://192.168.124.10:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   134  100   134    0     0  35189      0 --:--:-- --:--:-- --:--:-- 44666
{
    "consul": [],
    "consul-53": [
        "udp"
    ],
    "consul-8300": [],
    "consul-8301": [
        "udp"
    ],
    "consul-8302": [],
    "consul-8400": [],
    "consul-8500": [],
    "swarm": []
}

Let's run the registrator container on the rest of the nodes:

[root@docker2 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.11:8500
[root@docker3 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.12:8500
[root@docker4 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.13:8500
[root@docker5 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.14:8500
[root@docker6 ~]# docker run -d --name registrator -h registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://192.168.124.15:8500

Ok, so if we drill into the swarm service we can see our 3 swarm manager nodes:

[root@docker1 ~]# curl http://192.168.124.11:8500/v1/catalog/service/swarm | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   487  100   487    0     0   199k      0 --:--:-- --:--:-- --:--:--  237k
[
    {
        "Address": "192.168.124.10",
        "Node": "consul1",
        "ServiceAddress": "",
        "ServiceID": "registrator:mgr1:2375",
        "ServiceName": "swarm",
        "ServicePort": 3375,
        "ServiceTags": null
    },
    {
        "Address": "192.168.124.10",
        "Node": "consul1",
        "ServiceAddress": "",
        "ServiceID": "registrator:mgr2:2375",
        "ServiceName": "swarm",
        "ServicePort": 3375,
        "ServiceTags": null
    },
    {
        "Address": "192.168.124.12",
        "Node": "consul3",
        "ServiceAddress": "",
        "ServiceID": "registrator:mgr3:2375",
        "ServiceName": "swarm",
        "ServicePort": 3375,
        "ServiceTags": null
    }
]

So this is ready; let's just run a new container on one of the hosts and see if it gets registered:

[root@docker4 ~]# docker run -d --name web1 -p 80:80 nginx

Then we can check if it gets registered:

[root@docker1 ~]# curl http://192.168.124.11:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   175  100   175    0     0  81244      0 --:--:-- --:--:-- --:--:-- 87500
{
    "consul": [],
    "consul-53": [
        "udp"
    ],
    "consul-8300": [],
    "consul-8301": [
        "udp"
    ],
    "consul-8302": [
        "udp"
    ],
    "consul-8400": [],
    "consul-8500": [],
    "consul-8600": [
        "udp"
    ],
    "nginx-80": [],
    "swarm": []
}
[root@docker1 ~]# curl http://192.168.124.11:8500/v1/catalog/service/nginx-80 | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   166  100   166    0     0  56309      0 --:--:-- --:--:-- --:--:-- 83000
[
    {
        "Address": "192.168.124.13",
        "Node": "consul-agt1",
        "ServiceAddress": "",
        "ServiceID": "registrator:web1:80",
        "ServiceName": "nginx-80",
        "ServicePort": 80,
        "ServiceTags": null
    }
]


Great! So the basic configuration steps of the swarm cluster are done.

Now let's configure our docker CLI to connect to the swarm cluster, as easy as setting the DOCKER_HOST variable:

[root@liquid-ibm ~]# export DOCKER_HOST=192.168.124.10:3375
[root@liquid-ibm ~]# docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      swarm/1.2.5
 API version:  1.22
 Go version:   go1.5.4
 Git commit:   27968ed
 Built:        Thu Aug 18 23:10:29 UTC 2016
 OS/Arch:      linux/amd64
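
Since all three managers expose port 3375, a quick loop shows which one currently holds the primary role (the others report themselves as replicas; the IPs are the ones we advertised earlier):

[root@liquid-ibm ~]# for ip in 192.168.124.10 192.168.124.11 192.168.124.12; do echo -n "$ip: "; docker -H tcp://$ip:3375 info 2>/dev/null | grep '^Role'; done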


[root@manager1 ~]# docker info
Containers: 9
 Running: 9
 Paused: 0
 Stopped: 0
Images: 9
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
 host1: 10.0.4.5:2375
  └ ID: VYF2:T77K:W6Y4:2HAH:C63L:NOW2:OTVA:3H6G:IYIJ:KKIH:UFAN:7EL4
  └ Status: Healthy
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2016-10-02T22:07:21Z
  └ ServerVersion: 1.10.3
 host2: 10.0.5.5:2375
  └ ID: RFUE:OTN7:2JN2:GTME:B26G:SWUS:6OOZ:Y7LC:74F5:NPYS:ZTDQ:3PPS
  └ Status: Healthy
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2016-10-02T22:07:01Z
  └ ServerVersion: 1.10.3
 host3: 10.0.6.5:2375
  └ ID: 4CAW:2MNR:4RHE:RZQS:KRKF:4PMY:CIDL:NX2E:2WBR:V3YG:F2YV:DBEE
  └ Status: Healthy
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.018 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2016-10-02T22:07:11Z
  └ ServerVersion: 1.10.3
Plugins: 
 Volume: 
 Network: 
Kernel Version: 3.10.0-327.28.3.el7.x86_64
Operating System: linux
Architecture: amd64
Number of Docker Hooks: 2
CPUs: 3
Total Memory: 3.054 GiB
Name: mgr1
Registries: 


We can also check with docker ps all the containers running inside the cluster:

[root@manager1 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                             NAMES
2f652bc13dcb        gliderlabs/registrator:latest   "/bin/registrator con"   15 minutes ago      Up 15 minutes                                                                                                                                                                         host1/registrator
68a1b81f93e0        gliderlabs/registrator:latest   "/bin/registrator con"   43 minutes ago      Up 12 minutes                                                                                                                                                                         host3/registrator
31506334e797        gliderlabs/registrator:latest   "/bin/registrator con"   44 minutes ago      Up 13 minutes                                                                                                                                                                         host2/registrator
082bb48e46d0        progrium/consul                 "/bin/start -rejoin -"   46 minutes ago      Up 12 minutes       10.0.6.5:8300-8302->8300-8302/tcp, 10.0.6.5:8400->8400/tcp, 10.0.6.5:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 10.0.6.5:8500->8500/tcp, 10.0.6.5:8600->8600/udp   host3/consul-agt3
75e7f5c005c8        progrium/consul                 "/bin/start -rejoin -"   47 minutes ago      Up 13 minutes       10.0.5.5:8300-8302->8300-8302/tcp, 10.0.5.5:8400->8400/tcp, 10.0.5.5:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 10.0.5.5:8500->8500/tcp, 10.0.5.5:8600->8600/udp   host2/consul-agt2
a2614145aa81        progrium/consul                 "/bin/start -rejoin -"   47 minutes ago      Up 16 minutes       10.0.4.5:8300-8302->8300-8302/tcp, 10.0.4.5:8400->8400/tcp, 10.0.4.5:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 10.0.4.5:8500->8500/tcp, 10.0.4.5:8600->8600/udp   host1/consul-agt1


Now we can test it out, let's create 6 Ubuntu containers:

[root@manager1 ~]# for i in 1 2 3 4 5 6 ; do docker run -dit ubuntu /bin/bash; done
441859de3426dc549d7d725f856541308a8a53cec59a741676cec944d2885232
73bcfc6b0673033f62c169529ea84b0b90b4c6592ded6704b473379da25f9f3c
40387951fa62cc437e03936519d7ae7714117957c9902c7cf504702f61976c1f
b7f2d61ab7df316c59232116518a4540962d9dbca0d3570c2b99c0b09096c776
fec5f5670f21b369eb1836cc1b778522dcea0cb55eb8fa295183140dbb12aa33
caa4e9ad5bb2f76fe0dddf7c58b57546a3d63fcc9ff0560cbd53898e41dc441c

[root@manager1 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED              STATUS              PORTS                                                                                                                                                             NAMES
caa4e9ad5bb2        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host2/determined_euler
fec5f5670f21        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host1/grave_rosalind
b7f2d61ab7df        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host2/angry_pike
40387951fa62        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host3/serene_yonath
73bcfc6b0673        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host1/backstabbing_carson
441859de3426        ubuntu                          "/bin/bash"              About a minute ago   Up About a minute                                                                                                                                                                     host3/prickly_goldstine

Because we are using the spread strategy, our containers get spread among all the hosts in the cluster.
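
The strategy is chosen when the manager is started; if you wanted bin-packing instead of spread, swarm manage accepts a --strategy flag (just a sketch, with the same arguments as the earlier run otherwise):

[root@docker1 ~]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --strategy binpack --advertise 192.168.124.10:3375 consul://192.168.124.10:8500/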


Now for memory reservation: we can check how memory reservation affects the selected strategy; if a host doesn't have enough memory available, it gets filtered out:

[vagrant@manager1 ~]$ docker run -d -m 800m nginx
9da32ecf0e1cb63e29312f1ea222144c2095603cd4c0692e45e0d36177898364
[vagrant@manager1 ~]$ docker info | grep -E '(host|Containers|Memory)'
Containers: 10
 host1: 10.0.4.5:2375
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 800 MiB / 1.018 GiB
 host2: 10.0.5.5:2375
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 0 B / 1.018 GiB
 host3: 10.0.6.5:2375
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 0 B / 1.018 GiB
[vagrant@manager1 ~]$ docker run -d -m 800m nginx
c2ad2b316ec47b235d6d65e68bc6872fca493debb2ac985b4d59f22539f09a62
[vagrant@manager1 ~]$ docker info | grep -E '(host|Containers|Memory)'
Containers: 11
 host1: 10.0.4.5:2375
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 800 MiB / 1.018 GiB
 host2: 10.0.5.5:2375
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 800 MiB / 1.018 GiB
 host3: 10.0.6.5:2375
  └ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 0 B / 1.018 GiB
Total Memory: 3.054 GiB

Because host1 and host2 don't have enough free memory left for a 250 MB container, they all get scheduled to host3:


[vagrant@manager1 ~]$ docker run -d -m 250m nginx
7b28f9fbd7680e3593a6d6414ef35725d6e0037429718f18ade35ea5363dcb2c
[vagrant@manager1 ~]$ docker run -d -m 250m nginx
66eaee36612885237d25dd5742f6e427a3c7dde6566e2404a863c9caee9cb802
[vagrant@manager1 ~]$ docker run -d -m 250m nginx
00341da448307e46b3b84a96e1d64d439b2a9c93b9d98a0dce02242d97684947
[vagrant@manager1 ~]$ docker info | grep -E '(host|Containers|Memory)'
Containers: 14
 host1: 10.0.4.5:2375
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 800 MiB / 1.018 GiB
 host2: 10.0.5.5:2375
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 800 MiB / 1.018 GiB
 host3: 10.0.6.5:2375
  └ Containers: 6 (6 Running, 0 Paused, 0 Stopped)
  └ Reserved Memory: 750 MiB / 1.018 GiB
Total Memory: 3.054 GiB

Once we reach the maximum reservation, we get an error:

[vagrant@manager1 ~]$ docker run -d -m 250m nginx
629ec0e307717f60c34eb78c79965232f1577888898dae4421c3494480256670
[vagrant@manager1 ~]$ docker run -d -m 250m nginx
b4612a3b12b5524de470c00914c3a3406a057fdfdb5a0148320c58d964a7d569
[vagrant@manager1 ~]$ docker run -d -m 250m nginx
18b4ed1a3d98766281c07e7028cd180a1b2c3e90deff9b2bcf0ea6e3f8dd5420
[vagrant@manager1 ~]$ docker run -d -m 250m nginx
docker: Error response from daemon: no resources available to schedule container.

Let's clean up:

[vagrant@manager1 ~]$ docker ps -a | grep nginx | awk ' { print $1 }' | xargs docker rm -f 
18b4ed1a3d98
b4612a3b12b5
629ec0e30771
00341da44830
66eaee366128
7b28f9fbd768
c2ad2b316ec4
9da32ecf0e1c

Ok, let's do a quick example of affinity filters:

[vagrant@manager1 ~]$ docker run -d --name web1 nginx
f15f13cc4d936e04007fe3346ff853759f720f86e7d5b2205000966fedde0825
[vagrant@manager1 ~]$ docker ps | grep nginx
f15f13cc4d93        nginx                           "nginx -g 'daemon off"   10 seconds ago      Up 9 seconds        80/tcp, 443/tcp                                                                                                                                                   host1/web1

We created an nginx container called web1; now let's apply the "container" affinity filter, passed as an environment variable:

[vagrant@manager1 ~]$ docker run -d -e affinity:container==web1 nginx
f84f591368c43d29b10feee5c25e2a355e5336187c6eda27fa4263a162585968
[vagrant@manager1 ~]$ docker run -d -e affinity:container==web1 nginx
b4da5269c4aa887e31d0778069fb915f97ff0a6dfd3294027b2a83bfb156451c
[vagrant@manager1 ~]$ docker ps | grep nginx
b4da5269c4aa        nginx                           "nginx -g 'daemon off"   2 seconds ago        Up 1 seconds        80/tcp, 443/tcp                                                                                                                                                   host1/naughty_turing
f84f591368c4        nginx                           "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp                                                                                                                                                   host1/focused_bhabha
f15f13cc4d93        nginx                           "nginx -g 'daemon off"   2 minutes ago        Up 2 minutes        80/tcp, 443/tcp                                                                                                                                                   host1/web1

As you can see, all the new containers are getting created on host1, where our web1 container resides. Let's use !=, so it picks hosts where the web1 container is NOT running:

[vagrant@manager1 ~]$ docker run -d -e affinity:container!=web1 nginx
df7d04adb2b5955e7e66af5731382f196153c0144f290607b56a7fb3727c4874
[vagrant@manager1 ~]$ docker run -d -e affinity:container!=web1 nginx
4b1546e613650741cfc3d0f379313c2c616a3c64798ce30c621b85c665b2427b
[vagrant@manager1 ~]$ docker run -d -e affinity:container!=web1 nginx
c500155ed95ad3571031449d75ced105bb4463df934b966e629e54b874c9bc7e
[vagrant@manager1 ~]$ docker run -d -e affinity:container!=web1 nginx
45f72a92aee17d7518d650039466a47643bc1c67a103494a95e7eb2faedddc33
[vagrant@manager1 ~]$ docker ps | grep nginx
45f72a92aee1        nginx                           "nginx -g 'daemon off"   2 seconds ago        Up Less than a second   80/tcp, 443/tcp                                                                                                                                                   host3/ecstatic_turing
c500155ed95a        nginx                           "nginx -g 'daemon off"   3 seconds ago        Up 2 seconds            80/tcp, 443/tcp                                                                                                                                                   host2/fervent_wing
4b1546e61365        nginx                           "nginx -g 'daemon off"   16 seconds ago       Up 14 seconds           80/tcp, 443/tcp                                                                                                                                                   host3/goofy_euclid
df7d04adb2b5        nginx                           "nginx -g 'daemon off"   18 seconds ago       Up 16 seconds           80/tcp, 443/tcp                                                                                                                                                   host2/lonely_ritchie
c07559265cae        nginx                           "nginx -g 'daemon off"   About a minute ago   Up About a minute       80/tcp, 443/tcp                                                                                                                                                   host1/web1
If the affinity we have selected can't be matched, the run command fails:

[vagrant@manager1 ~]$ docker run -d -e affinity:container==web1000 nginx
docker: Error response from daemon: Unable to find a node that satisfies the following conditions 
[available container slots]
[container==web1000 (soft=false)].

web1000 doesn't exist, so the constraint can't be satisfied and the run fails. With ==~ we can tell it to make a best effort (a soft affinity) and, even if it can't satisfy the request, still run the container:

[vagrant@manager1 ~]$ docker run -d -e affinity:container==~web1000 nginx
fc49003257197e20cc345d20d4d03862f8486d0861b16f58ee14b2a2fd889603
[vagrant@manager1 ~]$ 


We can also use the standard constraints, selecting one of the host labels, like the kernel version, so the container only runs on that kind of host, for example:

[vagrant@manager1 ~]$ docker info | grep -E '(host|Labels)'
 host1: 10.0.4.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
 host2: 10.0.5.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
 host3: 10.0.6.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper

Here we have 3 identical hosts, but we could just as well have an Ubuntu host, a different kernel, etc.:

[vagrant@manager1 ~]$ docker run -d -e constraint:storagedriver==devicemapper nginx 
add257918ee82132a9e6dc1a21445e585d2cfdef12ab742c2e87e782cd5bbd5a

Ok, let's check custom constraints. You can add your own labels to the nodes; they have to be added to the docker daemon's startup options, so we need to modify the docker sysconfig file:

[root@host1 ~]# cat  /etc/sysconfig/docker | grep OPTION
OPTIONS='--selinux-enabled --log-driver=journald -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --label site=madrid --label zone=prod'

We do the same on the other hosts, each with its own site and zone values, and restart the docker daemon.
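
For example, on host2 it could look like this (the sed pattern matches the OPTIONS line above; the label values are per host, as the docker info output below shows):

[root@host2 ~]# sed -i "s|tcp://0.0.0.0:2375|tcp://0.0.0.0:2375 --label site=barcelona --label zone=prod|" /etc/sysconfig/docker
[root@host2 ~]# systemctl restart docker

Once all three daemons are back, docker info shows the new labels: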

[vagrant@manager1 ~]$ docker info | grep -E '(host|Labels)'
 host1: 10.0.4.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), site=madrid, storagedriver=devicemapper, zone=prod
 host2: 10.0.5.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), site=barcelona, storagedriver=devicemapper, zone=prod
 host3: 10.0.6.5:2375
  └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), site=gijon, storagedriver=devicemapper, zone=desarollo


So now we can place containers by those labels, for example running one in our madrid site, or pinning one to the desarollo zone:

[vagrant@manager1 ~]$ docker ps | grep nginx
be8994dc8136        nginx                           "nginx -g 'daemon off"   4 seconds ago       Up 3 seconds        80/tcp, 443/tcp                                                                                                                                                   host1/nauseous_mcnulty
[vagrant@manager1 ~]$ docker run -d -e constraint:zone==desarollo nginx 
434f457f6794297ac0a2e971348daa7ce4a85fe48be0cc35d39130154dd80101
[vagrant@manager1 ~]$ docker ps | grep nginx
434f457f6794        nginx                           "nginx -g 'daemon off"   3 seconds ago       Up 2 seconds        80/tcp, 443/tcp                                                                                                                                                   host3/suspicious_pasteur
be8994dc8136        nginx                           "nginx -g 'daemon off"   19 seconds ago      Up 18 seconds       80/tcp, 443/tcp                                                                                                                                                   host1/nauseous_mcnulty


So this is clearly very powerful...