Wednesday, 27 March 2019

Configure Docker Swarm Cluster on CentOS 7


Docker Swarm is the native clustering and scheduling tool for Docker containers. Current versions of Docker include Swarm mode for natively managing a cluster of Docker Engines, and a Swarm cluster can be configured and managed with the same familiar Docker CLI commands.

In this article, we configure a Docker Swarm cluster on CentOS 7 based servers. We use three nodes for our Docker Swarm cluster: one acts as the manager node and the other two as worker nodes.

Before moving on, note that this post follows a practical approach, demonstrating how things are done without diving into the theoretical nitty-gritty. For a basic to advanced understanding of the technology, we recommend reading Docker Deep Dive.

 


    System Specification:

    We have provisioned three identical virtual machines with the CentOS 7.6 operating system and the following specifications.

    Hostname:          docker-manager-01   docker-worker-01    docker-worker-02
    IP Address:        192.168.116.150/24  192.168.116.151/24  192.168.116.152/24
    CPU:               3.4 GHz (1 Core)    3.4 GHz (1 Core)    3.4 GHz (1 Core)
    Memory:            512 MB              512 MB              512 MB
    Storage:           40 GB               40 GB               40 GB
    Operating System:  CentOS 7.6          CentOS 7.6          CentOS 7.6
    Docker Version:    Docker CE 18.09     Docker CE 18.09     Docker CE 18.09

     

    Installing Docker Engine CE on CentOS 7:

    To run Docker in Swarm mode, we need to install Docker Engine CE on each node.

    Connect with docker-manager-01 using ssh as the root user. Execute the following command to configure local name resolution via /etc/hosts.

    [root@docker-manager-01 ~]# cat >> /etc/hosts << EOF
    > 192.168.116.150 docker-manager-01.example.com docker-manager-01
    > 192.168.116.151 docker-worker-01.example.com docker-worker-01
    > 192.168.116.152 docker-worker-02.example.com docker-worker-02
    > EOF

    Some of the packages required by Docker Engine CE are available in the EPEL (Extra Packages for Enterprise Linux) yum repository. Therefore, we install the EPEL yum repository before installing Docker Engine CE.

    [root@docker-manager-01 ~]# yum install -y epel-release.noarch
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * base: mirrors.ges.net.pk
     * extras: mirrors.ges.net.pk
     * updates: mirrors.ges.net.pk
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-11 will be installed
    --> Finished Dependency Resolution
    ...
    Installed:
      epel-release.noarch 0:7-11

    Complete!

    Install the Docker yum repository for CentOS 7 as follows:

    [root@docker-manager-01 ~]# yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
    Loaded plugins: fastestmirror
    adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
    grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
    repo saved to /etc/yum.repos.d/docker-ce.repo

    Enable the Docker CE (Nightly) yum repository. (This step is optional; the stable repository, which is enabled by default, is the recommended choice for production systems.)

    [root@docker-manager-01 ~]# yum-config-manager --enable docker-ce-nightly
    Loaded plugins: fastestmirror
    =========================== repo: docker-ce-nightly ============================
    [docker-ce-nightly]
    baseurl = https://download.docker.com/linux/centos/7/x86_64/nightly
    enabled = 1
    gpgcheck = True
    gpgkey = https://download.docker.com/linux/centos/gpg
    name = Docker CE Nightly - x86_64
    ...

    Build the yum cache before using the EPEL and Docker yum repositories.

    [root@docker-manager-01 ~]# yum makecache fast
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    epel/x86_64/metalink                                     | 9.1 kB  00:00
     * base: repo.inara.pk
     * epel: sg.fedora.ipserverone.com
     * extras: repo.inara.pk
     * updates: repo.inara.pk
    base                                                     | 3.6 kB  00:00
    docker-ce-nightly                                        | 3.5 kB  00:00
    docker-ce-stable                                         | 3.5 kB  00:00
    extras                                                   | 3.4 kB  00:00
    updates                                                  | 3.4 kB  00:00
    Metadata Cache Created

    Install Docker Engine CE using the yum command.

    [root@docker-manager-01 ~]# yum install -y docker-ce
    ...
    Installed:
      docker-ce.x86_64 3:18.09.3-3.el7

    Dependency Installed:
      audit-libs-python.x86_64 0:2.8.4-4.el7
      checkpolicy.x86_64 0:2.5-8.el7
      container-selinux.noarch 2:2.74-1.el7
      containerd.io.x86_64 0:1.2.5-3.1.el7
      docker-ce-cli.x86_64 1:18.09.3-3.el7
      libcgroup.x86_64 0:0.41-20.el7
      libseccomp.x86_64 0:2.3.1-3.el7
      libsemanage-python.x86_64 0:2.5-14.el7
      policycoreutils-python.x86_64 0:2.5-29.el7_6.1
      python-IPy.noarch 0:0.75-6.el7
      setools-libs.x86_64 0:3.3.8-4.el7

    Dependency Updated:
      policycoreutils.x86_64 0:2.5-29.el7_6.1

    Complete!

    Start and enable Docker service.

    [root@docker-manager-01 ~]# systemctl enable docker.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    [root@docker-manager-01 ~]# systemctl start docker.service

    Docker Swarm requires the following service ports to function.

    Port         Protocol   Description
    2376, 2377   TCP        used for Docker daemon encrypted communication
    7946         TCP, UDP   used for container network discovery
    4789         UDP        used for container ingress network

    Therefore, allow the above service ports in the Linux firewall.

    [root@docker-manager-01 ~]# firewall-cmd --permanent --add-port={2376,2377,7946}/tcp
    success
    [root@docker-manager-01 ~]# firewall-cmd --permanent --add-port={7946,4789}/udp
    success
    [root@docker-manager-01 ~]# firewall-cmd --reload
    success

    Verify the Docker installation by checking its version.

    [root@docker-manager-01 ~]# docker version
    Client:
     Version:           18.09.3
     API version:       1.39
     Go version:        go1.10.8
     Git commit:        774a1f4
     Built:             Thu Feb 28 06:33:21 2019
     OS/Arch:           linux/amd64
     Experimental:      false

    Server: Docker Engine - Community
     Engine:
      Version:          18.09.3
      API version:      1.39 (minimum version 1.12)
      Go version:       go1.10.8
      Git commit:       774a1f4
      Built:            Thu Feb 28 06:02:24 2019
      OS/Arch:          linux/amd64
      Experimental:     false

    We have installed Docker Engine CE on the first CentOS 7 server. Repeat the same steps on the remaining two nodes (i.e. docker-worker-01 and docker-worker-02) to install Docker Engine CE on them.
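The per-node steps above can be collected into a single shell function and replayed on each worker over SSH. This is only a sketch: the function name install_docker_ce and the SSH loop are hypothetical conveniences, not part of the original walkthrough, and they assume root SSH access to the workers.

```shell
# install_docker_ce: the installation steps from this section, bundled
# into one function (hypothetical helper name).
install_docker_ce() {
  yum install -y epel-release
  yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
  yum makecache fast
  yum install -y docker-ce
  systemctl enable docker.service
  systemctl start docker.service
  # Swarm service ports, as listed in the table below
  firewall-cmd --permanent --add-port={2376,2377,7946}/tcp
  firewall-cmd --permanent --add-port={7946,4789}/udp
  firewall-cmd --reload
}

# Example (not executed here): replay the function on both workers.
# for node in docker-worker-01 docker-worker-02; do
#   ssh "root@${node}" "$(declare -f install_docker_ce); install_docker_ce"
# done
```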

     

    Configuring Docker Swarm Cluster on CentOS 7:

    Now that we have installed and configured three Docker nodes, it is time to use them to form a Docker Swarm cluster.

    Initialize Docker Swarm mode on the manager node (i.e. docker-manager-01).

    [root@docker-manager-01 ~]# docker swarm init --advertise-addr 192.168.116.150
    Swarm initialized: current node (3b9wynaya1wu910nf01m5jeeq) is now a manager.

    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-1yn39o5d0aeiuvdiufp45rwbdbg5gxhrvbp3v38s5q6kcjh0q0-3m3vysmghac17vt3iz89mse9u 192.168.116.150:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

    Our Docker Swarm's manager node has been initialized.

    Docker has provided the command for joining other workers and managers to our Docker Swarm cluster. We therefore use this command on the docker-worker-01 node to join it to the swarm as a worker.

    Connect with docker-worker-01 using ssh as the root user and execute the command provided by Docker in the previous step.

    [root@docker-worker-01 ~]# docker swarm join --token SWMTKN-1-1yn39o5d0aeiuvdiufp45rwbdbg5gxhrvbp3v38s5q6kcjh0q0-3m3vysmghac17vt3iz89mse9u 192.168.116.150:2377
    This node joined a swarm as a worker.

    Repeat the same step on docker-worker-02.

    [root@docker-worker-02 ~]# docker swarm join --token SWMTKN-1-1yn39o5d0aeiuvdiufp45rwbdbg5gxhrvbp3v38s5q6kcjh0q0-3m3vysmghac17vt3iz89mse9u 192.168.116.150:2377
    This node joined a swarm as a worker.

    Execute the following command on any node to see detailed information about that node, including its swarm status.

    [root@docker-manager-01 ~]# docker info
    Containers: 0
     Running: 0
     Paused: 0
     Stopped: 0
    Images: 0
    Server Version: 18.09.3
    Storage Driver: overlay2
     Backing Filesystem: xfs
    ...
    Swarm: active
     NodeID: 3b9wynaya1wu910nf01m5jeeq
     Is Manager: true
     ClusterID: 45d0haajzr0gcwy09jglqbyc9
     Managers: 1
     Nodes: 4
    ...
     Node Address: 192.168.116.150
     Manager Addresses:
      192.168.116.150:2377
    ...
    Kernel Version: 3.10.0-957.el7.x86_64
    Operating System: CentOS Linux 7 (Core)
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 468.6MiB
    Name: docker-manager-01.example.com
    ...
    Product License: Community Engine

    To check the status of the nodes in the Docker Swarm cluster:

    [root@docker-manager-01 ~]# docker node ls
    ID                            HOSTNAME                        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
    3b9wynaya1wu910nf01m5jeeq *   docker-manager-01.example.com   Ready    Active         Leader           18.09.3
    ydgqdyoksx2mb0snhe1hwvco7     docker-worker-01.example.com    Ready    Active                          18.09.3
    vz02oe9e82deh8utiymnfobpk     docker-worker-02.example.com    Ready    Active                          18.09.3

    Our Docker Swarm cluster is configured successfully.
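If the join command printed by docker swarm init is ever lost, it can be reprinted at any time on the manager node with the docker swarm join-token subcommand (a standard Docker CLI command; the transcript below is illustrative, reusing the token from our cluster):

```
[root@docker-manager-01 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1yn39o5d0aeiuvdiufp45rwbdbg5gxhrvbp3v38s5q6kcjh0q0-3m3vysmghac17vt3iz89mse9u 192.168.116.150:2377
```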

     

    Creating a Replicated Service on Docker Swarm:

    To demonstrate the use of our Docker Swarm cluster, we create a replicated service on it.

    [root@docker-manager-01 ~]# docker service create --name web1 -p 80:80 --replicas 5 nginx
    vus0grc7koogpwipbmzga94k6
    overall progress: 5 out of 5 tasks
    1/5: running
    2/5: running
    3/5: running
    4/5: running
    5/5: running
    verify: Service converged

    A service with 5 replicas has been created, and the respective containers are distributed across the Docker Swarm cluster.
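The replica count can be changed later without recreating the service, using the standard docker service scale subcommand (illustrative transcript; the exact progress output may differ):

```
[root@docker-manager-01 ~]# docker service scale web1=3
web1 scaled to 3
```

The swarm then stops or starts containers as needed until the running task count matches the desired count.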

    To check where the containers are created and running, use the following command on the docker-manager-01 node.

    [root@docker-manager-01 ~]# docker service ps web1
    ID             NAME     IMAGE          NODE                            DESIRED STATE   CURRENT STATE           ERROR   PORTS
    go4pev4p0v53   web1.1   nginx:latest   docker-worker-01.example.com    Running         Running 4 minutes ago
    vevveyhefy4e   web1.2   nginx:latest   docker-worker-02.example.com    Running         Running 4 minutes ago
    5xzt23en2ldy   web1.3   nginx:latest   docker-manager-01.example.com   Running         Running 4 minutes ago
    96zo7cfq5bmx   web1.4   nginx:latest   docker-worker-01.example.com    Running         Running 4 minutes ago
    m7fibotbacs5   web1.5   nginx:latest   docker-manager-01.example.com   Running         Running 4 minutes ago

    Here, we have created a service using the nginx image and published port 80 of the web1 containers on port 80 of the host machines. Therefore, we also need to allow service port 80 in the host machines' firewall to access it over the network.

    Execute the following command on all nodes to allow the http service in the Linux firewall.

    [root@docker-manager-01 ~]# firewall-cmd --permanent --add-service=http
    success
    [root@docker-manager-01 ~]# firewall-cmd --reload
    success

    Browse to any Docker Swarm node and you will be routed to the default webpage of the nginx web server; the swarm's ingress routing mesh makes the published port reachable on every node.

    [root@docker-manager-01 ~]# curl http://docker-manager-01 | grep title
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   612  100   612    0     0   100k      0 --:--:-- --:--:-- --:--:--  119k
    <title>Welcome to nginx!</title>
    [root@docker-manager-01 ~]# curl http://docker-worker-01 | grep title
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   612  100   612    0     0  61937      0 --:--:-- --:--:-- --:--:-- 68000
    <title>Welcome to nginx!</title>
    [root@docker-manager-01 ~]# curl http://docker-worker-02 | grep title
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   612  100   612    0     0  75958      0 --:--:-- --:--:-- --:--:-- 87428
    <title>Welcome to nginx!</title>

    Our Docker service is configured successfully. Currently, we use the three node addresses to browse it; however, we could also configure an HTTP load balancer to provide a single address that reaches the service through any node.
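As an illustration, an external HAProxy instance could spread requests across the three swarm nodes. The snippet below is a minimal sketch of such a haproxy.cfg fragment, not part of the original setup; it assumes HAProxy runs on a separate host and uses the node IPs from the specification table above.

```
frontend swarm_http
    bind *:80
    default_backend swarm_nodes

backend swarm_nodes
    balance roundrobin
    server docker-manager-01 192.168.116.150:80 check
    server docker-worker-01  192.168.116.151:80 check
    server docker-worker-02  192.168.116.152:80 check
```

Because of the ingress routing mesh, any node can answer for the published port, so the load balancer does not need to know which nodes actually run replicas.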

    Docker Swarm cluster on CentOS 7 has been configured.




    4 comments:

    1. Thank you, this tutorial makes a lot clear.
      Is it possible to cluster the managers (e.g. 2 managers and 3 workers)?

      1. Hi,
        Thanks for liking this article.
        Yes, we can configure more than one manager in a Docker Swarm.
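        For example, a second manager can be added with a manager join token, or an existing worker can be promoted; both are standard Docker CLI subcommands (illustrative commands, run on the current manager):

```
[root@docker-manager-01 ~]# docker swarm join-token manager
[root@docker-manager-01 ~]# docker node promote docker-worker-01.example.com
```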

    2. Can you please show us how to update the nginx default index.html on the docker manager and push to the 2 docker nodes? Thank you.

    1. Hi, you should create a directory with a custom index.html file. While running a container, you can mount that directory at /usr/share/nginx/html (the default web root of the official nginx image); nginx will then serve your custom index.html.
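        An illustrative command (the service name web2 and source path /srv/web are hypothetical; with a bind mount, the directory must exist on every node, since each replica mounts it from its own host):

```
[root@docker-manager-01 ~]# docker service create --name web2 -p 8080:80 \
    --mount type=bind,source=/srv/web,target=/usr/share/nginx/html nginx
```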
