Docker tutorial: Get started with Docker Compose

Learn how to use Docker’s native service configuration and deployment tool for testing and debugging multi-container apps


Containers are meant to provide component isolation in a modern software stack. Put your database in one container, your web application in another, and they can all be scaled, managed, restarted, and swapped out independently. But developing and testing a multi-container application isn’t anything like working with a single container at a time.

Docker Compose was created by Docker to simplify the process of developing and testing multi-container applications. It’s a command-line tool, reminiscent of the Docker client, that takes in a specially formatted descriptor file to assemble applications out of multiple containers and run them in concert on a single host. (Tools like Docker Swarm or Kubernetes deploy multicontainer apps in production across multiple hosts.)

In this tutorial, I walk through the steps needed to define and deploy a simple multi-container web service app. While Docker Compose is normally used for development and testing, it can also be used for deploying production applications. For the sake of this tutorial, I concentrate on dev-and-test scenarios.

Docker Compose example

A minimal Docker Compose application consists of three components:

  1. A Dockerfile for each container image you want to build.
  2. A YAML file, docker-compose.yml, that Docker Compose will use to launch containers from those images and configure their services.
  3. The files that comprise the application itself.

In this example, I’m going to create a toy web messaging system, written in Python using the Bottle web framework and configured to store data in Redis. If it were used as a production app, it would be horrendously insecure and impractical (not to mention underpowered!). But the point is to show how the pieces fit together, and to provide you with a skeleton you could flesh out further on your own.

To obtain all the pieces at once, download and unpack the Docker Compose example file into a working directory. You will need to have the most recent version of Docker installed; I used version 17.12.01, which was the latest stable version at the time of writing. Note that this tutorial should work as is on Docker for Windows, Docker for Mac, and the conventional Linux incarnation of Docker.

The Dockerfile included in the bundle is simple enough:

FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
CMD python app.py

This set of commands describes an image that uses the stock python:3 image as its base and copies in two files (both included in the .zip bundle): requirements.txt and app.py. The former is used by pip to install the application’s dependencies; the latter is the Python application itself. Note that requirements.txt is added in its own step, before the rest of the code, so that the pip install layer is cached and rerun only when the dependencies change.
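The article doesn’t reproduce the contents of requirements.txt, but for an app built on Bottle and Redis it would look something like the following; this is an illustrative sketch, and the bundled file may pin specific versions:

```text
# Illustrative requirements.txt; the bundled file may differ or pin versions
bottle
redis
```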

The elements of the docker-compose.yml file are worth examining in detail:

version: '3'

services:
  redis:
    image: redis
  web:
    build: .
    command: python3 app.py
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - redis

The version line specifies the version of the Docker Compose file format to use. Docker Compose has been through a number of revisions, each tied to specific versions of the Docker engine; version 3 is the most recent as of this writing.

The services section defines the various services used by this particular container stack. Here, I’m defining two services: redis (which uses the redis container image to run the Redis service), and web (the Python application). Each service descriptor provides details about the service:

  • build: Describes configurations applied at build time. It can be just a pathname, as shown here, or it can provide details such as a Dockerfile to use (instead of the default Dockerfile in the directory) or arguments to pass to the Dockerfile during the build process.
  • command: A command to run when starting the container. This overrides the CMD statement supplied in the container’s Dockerfile, if any.
  • volumes: Any paths to volumes to be mounted for this service. You can also specify volumes as a top-level configuration option and reuse any volumes defined there across multiple containers in the docker-compose.yml file.
  • ports: Port mappings for the container. You can use a simple format, as shown here, or a more detailed format that describes which protocols to use.
  • depends_on: Describes the order of dependencies between services. Here, because web depends on redis, redis must be brought up first when Docker Compose starts the app. Note that depends_on controls start order only; it does not wait for redis to actually be ready to accept connections.
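To illustrate a few of the more detailed forms mentioned above, here is a hypothetical variant of the web service definition. The long-form ports syntax (which requires file format version 3.2 or later) and the named appdata volume are not part of the sample app; they are just examples of what the file format allows:

```yaml
version: '3.2'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile    # explicit Dockerfile name (this is the default)
    ports:
      - target: 8000            # port inside the container
        published: 8000         # port published on the host
        protocol: tcp
    volumes:
      - appdata:/code           # mount a named volume defined below

volumes:
  appdata:                      # top-level volume, reusable across services
```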

There are many more options available in services, but these few are enough to get a basic project started.

Essential Docker Compose commands

Once you have unpacked the sample app in a working directory, the next step is to build it and get it running as a single, basic instance. Through this process, you can get to know most of the major commands for Docker Compose and understand how they’re used in the context of a project.

Note that all these commands are run from the directory containing the docker-compose.yml file and the other files for the project.

docker-compose build

The first command to run with a new Compose project, build, assembles any images for the project that need to be built from scratch according to the project’s Dockerfile. Any parts of the project that depend on an image as is—in this case, Redis—are not handled here.

The results of the build process are echoed to the console. If all goes well, you should see a new image in the list of local images when you type docker images.

Output from a successful run of docker-compose build for our sample application. Note how each step in the Dockerfile is documented in detail.

docker-compose up

The up command sets up networking and launches containers from all the images in the stack, as described in the docker-compose.yml file. Note that if you need images that aren’t present locally, they’re pulled at this stage. 

Once the containers are running, you see details from the running containers echoed to the console. The console remains attached to the containers, so you can stop the running application stack by pressing Ctrl-C.

After the application finishes launching, point a web browser to http://localhost:8000. You will see a web page for a primitive messaging app that allows you to leave messages for any of three people (David, Krista, and Adi) and to read the messages sent to them. The web front end is provided by the Python script; the data is stashed as simple key-value pairs in Redis.

docker-compose down

The down command stops the containers and removes the containers and networks created by running up. (By default it leaves volumes and images in place; add the -v flag to remove named volumes as well, and --rmi all to remove images.) Hitting Ctrl-C after running docker-compose up terminates the running instances of everything, but doesn’t perform any of this cleanup. Use down when you want a clean slate, for example, at the end of a day’s test sessions, or when clearing the decks to work with a new app.

docker-compose up --no-start

If you want to set up a Docker Compose application but you don’t want to start it (for instance, if you want to run it noninteractively) use up --no-start instead of just up. Originally, Docker Compose had a command called create that served this function, but it has been deprecated in favor of using up with the --no-start flag.

docker-compose up --scale

If you want to launch multiple replicas of a service when bringing up your app, you can either declare this in your Docker Compose file using deploy, or use the --scale switch when running Compose. The example app doesn’t require this, but the --scale switch comes in handy when you need to test the behavior of an app with multiple service instances running.
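As an example of the declarative route, a hypothetical worker service (not part of the sample app) could request three replicas in the Compose file; on the command line, the equivalent would be docker-compose up --scale worker=3. Note that in Compose file format version 3, the deploy key is honored by docker stack deploy on a swarm, while plain docker-compose up relies on the --scale switch:

```yaml
version: '3'

services:
  worker:                   # hypothetical service; not part of the sample app
    build: .
    command: python3 worker.py
    deploy:
      replicas: 3           # honored by 'docker stack deploy' on a swarm
```

Also note that you can’t scale a service that publishes a fixed host port, like web’s "8000:8000" mapping, because only one container can bind that port at a time.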

docker-compose start/restart/stop/pause/unpause

The start, restart, and stop commands allow you to start, restart, or stop a Docker Compose application that has been prepared with the up --no-start command. When executed, they immediately return control of the console to the user, instead of attaching the console to the container. The pause and unpause commands let you suspend and resume execution of the containers, rather than stopping or restarting them entirely.

Additional Docker Compose commands

A few other Docker Compose commands are useful for debugging or monitoring running applications:

  • config: Validates and dumps the Docker Compose file used for the current project. This lets you catch any errors it contains before you try the build or deployment process.
  • events: Streams to the console events for every container in the project. Use the --json flag to print the results as JSON (useful for ad hoc piping into a file).
  • port: Prints the public ports for a port binding on a running service instance. Useful if you need to discover which port to connect to for a service.

Finally, many of the other commands, like exec, images, logs, kill, rm, ps, pull, push, and top, echo the same functionality you’ll find in the main Docker client.

Docker Compose and distributed application bundles

Once you have a working multi-container application created with Docker Compose, your next step is putting it into production. Just as a Dockerfile can be built into an image for creating containers, a Docker Compose file can be packaged into a deployable artifact that describes an entire application stack.

The most recent versions of Docker introduced the concept of a distributed application bundle, or DAB. This feature, still considered experimental, lets you create a DAB from the Docker Compose file that can be deployed as a distributed, multicontainer application to a Docker Swarm cluster.

To generate a DAB, first push the container images you’ve created to a registry so they can be obtained later by any systems that will run your application. Then run docker-compose bundle in the directory with your Compose file. This produces a .dab bundle file that can be deployed to Docker Swarm, or to other services that support the DAB format.

How you take the next step will depend on how you plan to deploy the production app. Docker Swarm, for example, accepts Docker Compose apps directly, as does Docker Enterprise. Kubernetes, on the other hand, does not work with the Docker Compose format, but there are tools to translate a Docker Compose file to a Kubernetes Resources file and generate other Kubernetes artifacts as well.

Regardless of your production destination, the Docker Compose file you have created will continue to be useful for testing, first and foremost, as you continue to develop your multicontainer Docker app.

Copyright © 2018 IDG Communications, Inc.