
Docker tutorial: Get started with Docker swarm mode

Learn how to create and manage Docker container clusters the easy way, with Docker’s native orchestration tool


Sometimes, you only need one container, because all you need is a single instance of an app. But sometimes you need a cluster of containers that can respond to changes in demand, and that can be upgraded without taking your application offline.

Docker has a built-in mechanism for clustering containers, called “swarm mode.” With swarm mode, you can use the Docker Engine to launch a whole fleet of application instances across multiple machines. Swarm mode is hardly the only way to create a clustered Docker application, but it is the most convenient way, allowing you to create container clusters without needing additional software.

In this article, I look at how swarm mode compares to heavier-duty orchestration systems like Kubernetes, and walk through the steps needed to get a basic swarm up and running.

Docker swarm mode vs. Kubernetes

Swarm mode provides you with a small-scale but useful orchestration system for Dockerized apps. Apps can run in a decentralized fashion, with no one node being the master node, although you will need to designate at least one node as a manager for the cluster as a whole.

An app, or “service” in swarm mode parlance, is set up by describing the ideal state for the app—how many replicas to run, for example. The Docker Engine then ensures that state is maintained across the cluster. Any changes, including the deployment of a new version of an app, can be rolled out gradually across the cluster without having to shut down the entire application.

If these sound like features also offered by Kubernetes, that’s because they are. But despite the similarities, Kubernetes and Docker swarm mode have significant differences.

  • Kubernetes is a much larger application that is maintained by a third-party community. Swarm mode is a native part of Docker. If you have the most recent version of Docker, you have access to swarm mode.
  • Kubernetes requires a great deal of additional work to learn, set up, and manage. Swarm mode can be configured with far less effort and conceptual overhead.
  • Kubernetes is designed for complex, enterprise-scale workloads. Swarm mode is better suited to smaller, more straightforward loads, such as internal-facing applications that still need scaling or rolling deployment.
  • Kubernetes has native features for dynamically scaling loads to meet demand. Swarm mode does not.
  • Swarm mode can natively use Docker Compose definition files for applications. Kubernetes requires Compose files to be converted into its own definition file format, although there are tools to do that. (A minimal example of a Compose-based swarm deployment follows this list.)
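
For instance, an existing Compose file can be deployed to a swarm as a “stack” with a single command. The file name and stack name below are illustrative, and the Compose file must use the version 3 format:

docker stack deploy -c docker-compose.yml mystack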

In short, Kubernetes is designed for complex cluster deployments that could grow enormously. Docker swarm mode is suitable for smaller, simpler, and more straightforward deployments that will likely never grow larger than a few nodes.

Creating a Docker cluster with Docker swarm mode

Swarm mode requires at least three Windows, macOS, or Linux hosts running Docker Engine. At least one host needs to be designated as a manager, with the others designated as workers. The manager needs to have a fixed IP address, and all nodes need TCP ports 2377 and 7946 and UDP ports 7946 and 4789 open to one another.
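
How you open those ports depends on your environment. As one illustration, on a Linux host protected by ufw (an assumption; your firewall tooling may differ), the commands might look like this:

sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp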

A swarm is launched by running docker swarm init on the host you’ve chosen as manager:

docker swarm init --advertise-addr 192.168.xx.xx

where the address specified is the IP address for the manager machine.

The Docker client will respond with the exact command needed to add a worker to the swarm. This involves running docker swarm join on each worker machine, along with a token—a long, complex character string—generated by the manager.
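
The join command will look something like the following, with the real token and manager address supplied by the output of docker swarm init (the values below are placeholders):

docker swarm join --token SWMTKN-1-<token> 192.168.xx.xx:2377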

Once you’ve started the workers, run docker node ls on the manager to make sure that the manager and its workers are running.
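
If all is well, the output will look something like this (the node IDs and hostnames here are purely illustrative; the asterisk marks the node you are connected to):

ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
dxn1zf6l61qsb1josjja83ngz *   manager1   Ready    Active         Leader
9j68exjopxe7wfl6yuxml7a7j     worker1    Ready    Active
a1o2p3q4r5s6t7u8v9w0x1y2z     worker2    Ready    Active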

Deploying services in Docker swarm mode

To start a swarm-based Docker application, or “service,” you run the docker service create command on the manager:

docker service create --replicas 1 --name servicename imagename cmd

  • --replicas indicates how many instances of the service to run.
  • --name gives the service a distinct name, in this case servicename.
  • imagename is the name of the image to pull from the repository for this service.
  • cmd is a command to run inside the containers once they start, if any is needed.

Many more options are available, but these are the minimum needed to start the deployment.
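
As a concrete, illustrative example, the following launches three replicas of the publicly available nginx image under the name web, and makes container port 80 available on port 8080 of every node in the swarm via the swarm’s routing mesh:

docker service create --replicas 3 --name web --publish 8080:80 nginx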

Once you have services deployed, you can run docker service inspect on the manager to see details about the created services, and docker service ps to see details about each node’s service load.
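
For instance, assuming a service named servicename, the checks might look like this (the --pretty flag produces human-readable output instead of raw JSON):

docker service inspect --pretty servicename
docker service ps servicename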

Managing services in Docker swarm mode

Most of the time, a launched swarm should just work. But sometimes you need to change its behavior to meet demand or handle other circumstances.

Scaling a service

If you need more (or fewer) instances of a specific service, you can run docker service scale on the manager to do so:

docker service scale myservice=3

The scaling process may take some time to complete; use docker service ls to see if the changes have taken effect.
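
For example, after scaling myservice to three replicas, the docker service ls output should include a line like this (the ID and image name are illustrative):

ID             NAME        MODE         REPLICAS   IMAGE
x6z0qrkjyhn0   myservice   replicated   3/3        myimage:latest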

Note that if you set the replica count to 0, no instances of the service will run, but the service itself remains defined in the swarm and can be scaled back up later.

Draining nodes

Sometimes you need to move replicas out of a specific node so that the node can be shut down or rebooted. To do that, log into the manager node and issue the command:

docker node update --availability drain nodename

The changes may take a moment to show up; use docker service ps to see if the service load for that node has been removed.

To restore the load for a specific node, use:

docker node update --availability active nodename
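
To confirm a node’s current state, check the AVAILABILITY column of docker node ls, or query the node directly with a Go template (the path below reflects how Docker stores the node’s spec):

docker node inspect --format '{{ .Spec.Availability }}' nodename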

Again, unlike Kubernetes, Docker swarm mode does not include autoscaling. It is possible to build autoscaling mechanisms by hand that would issue the appropriate scaling commands to a Docker swarm, but swarm mode can’t handle this natively.
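
As a rough illustration of what such a hand-rolled mechanism might look like, the following sketch polls a hypothetical metric helper, get_backlog, and scales a service up or down accordingly; the helper, thresholds, and replica counts are all assumptions you would replace with your own:

#!/bin/sh
# Minimal hand-rolled "autoscaler" sketch for a swarm service.
SERVICE=myservice
HIGH_WATER=100

while true; do
  BACKLOG=$(get_backlog)               # hypothetical metric command
  if [ "$BACKLOG" -gt "$HIGH_WATER" ]; then
    docker service scale "$SERVICE=6"  # scale up under load
  else
    docker service scale "$SERVICE=2"  # scale back down when idle
  fi
  sleep 60                             # check once a minute
done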

Updating services in Docker swarm mode

If you have a new version of a specific service that you want to roll out across nodes, issue the following command from the manager node:

docker service update --image imagename:version servicename

This replaces every instance of the service servicename with one built from the image imagename:version.

You can roll back to the previous version of that service by issuing this command:

docker service update --rollback servicename

You can use other service update options to control how the update is rolled out—for example, if you want to delay the rollout by a specific amount of time.
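
For example, the following (with illustrative values) updates two tasks at a time and waits ten seconds between batches:

docker service update --image imagename:version --update-parallelism 2 --update-delay 10s servicename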

Removing services in Docker swarm mode

If you need to remove a service from a swarm, issue the following command from the manager:

docker service rm servicename

Note that if the nodes in a swarm are no longer running any services, that does not mean the swarm has been shut down. It just means the swarm currently has no services deployed.

To dismantle a swarm, you first need to remove each worker from it by running the following command on the worker itself:

docker swarm leave

Back on the manager, you can then remove the departed workers from the swarm’s node list:

docker node rm <nodename>

where nodename is the name of the node as shown in docker node ls.

Finally, have the manager node leave the swarm as well. Because the last manager cannot simply be demoted or removed while the swarm exists, this requires the --force flag:

docker swarm leave --force

Once that’s done, the swarm no longer exists.

Automating Docker swarms

Because Docker swarm mode is, by design, a simple solution for small clusters, it omits a number of features found in larger-scale orchestration systems. The most notable omission is built-in automation such as autoscaling, on the assumption that a swarm mode user 1) typically won’t need it, 2) will fill the gap with custom-designed automation, or 3) will eventually graduate to a more sophisticated product.

It’s not difficult to find shell scripts that automate the process of getting a Docker swarm up and running. If you’re handy with Python, you can use the docker-py library to automate interactions with Docker Engine, including Docker swarm mode.
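
A minimal sketch of such a script might look like this. It assumes Docker Engine is installed on every host, that the manager can reach each worker over passwordless SSH, and that a hypothetical workers.txt file lists one worker hostname per line:

#!/bin/sh
# Minimal sketch: bring up a swarm and join workers over SSH.
MANAGER_IP=192.168.0.10          # illustrative manager address

docker swarm init --advertise-addr "$MANAGER_IP"

# Ask the newly created manager for the worker join token.
TOKEN=$(docker swarm join-token worker -q)

# Join each worker listed in workers.txt (hypothetical file).
while read -r WORKER; do
  ssh "$WORKER" "docker swarm join --token $TOKEN $MANAGER_IP:2377"
done < workers.txt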

It is also possible to use such components to add monitoring and scaling functionality to the swarm. At that point, though, you may want to consider a more robust solution such as Kubernetes, which was built to handle those more advanced scenarios.

Copyright © 2018 IDG Communications, Inc.