The hidden benefits of Docker for QA

Docker and containers can make QA testing easier, faster, and more effective


Docker has been dominating the devops conversation since its inception in 2013, sparking interest in container-driven pipelines and helping organizations to transform applications by shifting to full-stack deployments on containers. Following market interest, many cloud vendors have also rushed to support Docker in their services, in anticipation of future development teams that look significantly different than they do today. No matter what, containers are going to change the way applications are built, tested, and deployed.

Despite all this activity, there isn’t consensus on how exactly containers fit into existing applications and development teams. Devs are rabid fans of Docker but tend to use it only for sandboxing and prototyping. QA teams don’t see how Docker affects their workflows. And ops is wary of the security issues around Docker, preferring traditional VMs instead. Even as devops aims to bring these three functions closer together, getting them to agree on the importance and role of containers is the only way to embrace the new architectures and release processes.

In this article, we focus on the impact Docker has on QA and how you can prepare for -- and even embrace -- the Docker takeover.

The Docker difference

As a QA leader in your organization, you probably overhear your dev counterparts rave about the world of difference that Docker makes when building applications. From their point of view, they can deploy their code to a local container, do a local test, make an image, and pass it on. It’s hard for them to understand why that image is not merely thrown into production.
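
That workflow is easy to sketch with the stock Docker CLI. The commands below are a minimal, hypothetical example; the image name, tag, registry, and /health endpoint are placeholders for illustration, not anything your application will actually use:

    # Build an image from the Dockerfile in the current directory
    docker build -t registry.example.com/myapp:1.4.2 .

    # Run it locally, mapping the app's port to the host for a quick check
    docker run -d --name myapp-test -p 8080:8080 registry.example.com/myapp:1.4.2
    curl -f http://localhost:8080/health    # smoke-test the running container

    # Tear down, then hand the exact same image to the next stage
    docker rm -f myapp-test
    docker push registry.example.com/myapp:1.4.2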

But you still may not be convinced Docker matters to QA. After all, most Docker implementations are dev only. And the systems for testing applications in containers have not been established.

Whatever your impression of Docker, it’s becoming clear that the container revolution will not be restricted to development. Docker the company is rapidly releasing tools to calm the fears of operations, and many third-party solutions have made the security and oversight of Docker containers as easy as that of any other infrastructure. But if dev adopts containers, and ops follows, where does that leave QA, when many more versions of the application land in the integration environment and each one runs in its own container?

As container-driven pipelines emerge, the pressure on QA to embrace containers -- and ensure that QA processes keep pace with new architectures and release practices -- will be unavoidable. But this isn’t bad news. In fact, if you realize the benefits Docker brings to QA, you’ll be eager to get started right away.

Unlike virtual machines, each of which includes a full guest OS and can run many gigabytes in size, Docker containers share a common OS kernel and weigh only a few megabytes. This makes them easy to move around and quick to start, run, and scale. Containers pack a lot more applications or versions of an application per server than VMs, so they're ideal for load and performance testing. New Docker containers can be spun up in the cloud in seconds and are great for testing an application against real-world user behavior.
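
To make the speed claim concrete, here is a minimal sketch -- assuming a hypothetical myorg/myapp image that listens on port 8080 -- that starts ten identical instances in seconds, something that would take minutes with full VMs:

    # Start 10 identical instances of a (hypothetical) app image in seconds,
    # each mapped to its own host port -- handy for load and performance tests
    for i in $(seq 1 10); do
      docker run -d --name app-$i -p $((8080 + i)):8080 myorg/myapp:latest
    done

    docker ps --filter "name=app-"    # confirm all 10 are running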

For QA, Docker solves the classic problem of ensuring that you test the same application you ship. Because everything the application needs to run is packaged in the container, it can run predictably and consistently across the pipeline, and with different configurations -- no more pesky variables to track down. If a configuration issue is the source of a bug, then the container image in use is the point where it should be addressed.
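
That consistency comes from the image itself. As a minimal, hypothetical example, the Dockerfile below pins the base image and the application dependencies to exact versions, so every container started from the resulting image behaves identically on a developer laptop or in the QA environment (the Python base and file names are assumptions for illustration):

    # Write a pinned Dockerfile: exact base image tag, exact dependency versions
    cat > Dockerfile <<'EOF'
    FROM python:2.7.11
    WORKDIR /app
    # requirements.txt lists exact versions (e.g. flask==0.10.1)
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF

    docker build -t myorg/myapp:1.4.2 .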

Docker and microservices

Container technology not only impacts the delivery chain and release process; it can also impact the application architecture itself. Docker serves many purposes in an organization, but the key way it lets organizations take the next step in modern software delivery is by enabling microservices.

A microservices architecture splits an application into small, independent services, which may even be written in different languages and managed by different teams. Microservices enable, and even require, a decentralized team structure. This means teams are cross-functional, with the ability to develop, test, and deploy what they build without depending on any other team.

This decentralized approach makes it easier to monitor and troubleshoot issues, because you can quickly zero in on the service that is malfunctioning. With a monolithic application, every minor change requires you to redeploy the entire application. With microservices, you deploy only the service that needs the change and leave the rest of the application untouched.
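
As a rough sketch with the plain Docker CLI (service names, tags, and ports are hypothetical, and a real pipeline would likely use an orchestration tool instead), redeploying one service looks like this:

    # Rebuild and redeploy only the cart service; the other services keep running
    docker build -t myorg/cart-service:2.4.1 ./cart-service
    docker rm -f cart-service 2>/dev/null || true
    docker run -d --name cart-service -p 9002:8080 myorg/cart-service:2.4.1

    # The untouched services are unaffected by this release
    docker ps --format '{{.Names}}\t{{.Image}}'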

Microservices architecture is the way of the future, but many organizations are not ready for it. Pioneered by the likes of Google, Amazon, and Netflix over the past decade, microservices have not been a necessity for smaller development shops. However, web applications are becoming increasingly complex. They can no longer be scaled on a single server or as a monolith. This is especially true when your team grows past 50 or 75 members, and productivity takes a hit because of the silos that crop up. Microservices architecture is becoming the preferred way to build software teams and applications that scale.

While not every organization is ready to adopt a microservices architecture immediately, it’s never too late to start moving in that direction. You could take small steps by breaking out one part of your application as a service and tackling additional pieces over time to eventually cover your entire application.

No matter how quickly or slowly you transition to a microservices architecture, you will need Docker to package services independently of one another. A decentralized team that owns the entire pipeline will need a consistent and reliable way to move the application across the pipeline and improve collaboration. This is exactly what Docker makes possible.

Implementing QA in container environments

To accommodate the rapid pace of independent microservices deployments, QA must move from a linear process to one that supports nonlinear deployments. In other words, QA can ignore containers, but only at the risk of becoming a bottleneck. After an application is deployed, it is addressable like any other application, be it microservices or monolith, and from that point on it can be tested like anything else. However, if QA does not change its processes, the goal of replacing old infrastructure with new -- or old containers with new -- becomes much harder to accomplish.
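
To illustrate, here is a minimal smoke test against a freshly deployed container; the image name, ports, and endpoints are assumptions for the sake of the sketch:

    # Deploy, wait for the container to report healthy, then smoke-test it
    docker run -d --name profile-service -p 9001:8080 myorg/profile-service:3.1.0

    for i in $(seq 1 30); do
      curl -sf http://localhost:9001/health && break
      sleep 1
    done

    # From here on it is ordinary black-box testing against an HTTP endpoint
    curl -sf http://localhost:9001/api/v1/profiles || echo "smoke test failed"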

For example, a user profile service might be deployed daily, a shopping cart service weekly, and a log-in service only once every three months. But the actual release dates may not be predictable; after all, part of the goal is to release whenever there is a business need. QA needs a strategy that is dynamic enough to support testing any service at any point in time.
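
One way to get that flexibility, sketched here under an assumed naming convention (each service ships a companion <service>-tests image that reads its target URL from the environment), is a single parameterized script that any deployment event can call:

    #!/bin/sh
    # run-tests.sh -- called by the pipeline whenever any service is deployed.
    # Hypothetical convention: each service has a companion test image named
    # <service>-tests that takes the target URL as an environment variable.
    SERVICE=$1         # e.g. "cart-service"
    TARGET_URL=$2      # e.g. "http://integration.example.com:9002"

    docker run --rm \
      -e TARGET_URL="$TARGET_URL" \
      "myorg/${SERVICE}-tests:latest"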

If QA becomes the bottleneck, it is the equivalent of having containers only in development and never in production. But it’s not simply about keeping up. Containers also support QA in the following ways:

  1. Sharing containers, not bugs. If an issue comes up, you can share images instead of bug reports. An image is the application, potentially even at the moment in time when a test failed. Where modern testing tools give you screenshots and even videos, containers let you hand over the whole application. “Can’t reproduce” is no excuse, because you are giving them the machine (see the sketch after this list).
  2. Easier root-cause analysis. System-level bugs are the hardest to catch, and it can be difficult to discover their root cause. With containers the process is easier, because the system configuration is captured in the image that was used during deployment. If orchestration tools are used to create images, it should be easy to show what system-level change was made and why it is the source of a bug.
  3. Absolute version pinning. It’s much easier to pin frameworks, libraries, and artifact versions. Containers allow you to talk in absolutes: No matter how many times you do a release, you know exactly what the image contains, and any replication of that image will be consistent.
  4. Run more tests faster. Because deployments are containers, you can deploy identical containers at the same time and run different portions of your test suite against each. If you build your suite as many smaller tests, you can run subsets of it in parallel. You can also run tests with slight variations -- a new way to do exploratory testing and to help operations and development teams spot areas for improvement.
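
Here is what the first point can look like in practice -- a hedged sketch in which QA snapshots a failing container with docker commit and the developer pulls the exact failing state; the registry and tag names are placeholders:

    # QA: snapshot the container at the moment of failure and publish it
    docker commit myapp-test registry.example.com/myapp:bug-1234
    docker push registry.example.com/myapp:bug-1234

    # Dev: pull and run the exact failing state -- no "can't reproduce"
    docker pull registry.example.com/myapp:bug-1234
    docker run -d -p 8080:8080 registry.example.com/myapp:bug-1234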

These benefits add up: Containers strengthen QA’s ability to communicate issues, support the delivery chain both upstream and downstream, and build in the consistency that testers have always sought in the fight against system-level issues.

There is no single way to run QA with container-driven applications. But there is one criterion, and that is automation. Due to the speed of modern development and the increasing number of things to test, test automation is a must.

Testing infrastructure for containers

Due to the ephemeral nature of containers, the test infrastructure needs to be independent of the containers themselves, including the host operating system and the datacenter. You have three basic options here.

  1. You could leverage your existing test infrastructure, but most of the time it is not set up to run tests on containers with relatively short life spans. Ad hoc infrastructure will also put more of an operations burden on the QA team, as operations will likely refuse to provide one-off support for environments that differ substantially from production.
  2. You could also consider containerizing the testing grid to match the application architecture. However, containers are not designed to support multiple application installations, such as multiple browsers. Perhaps even more important, containers cannot run an operating system other than that of the host they share -- which, for now, means whatever Linux distribution the organization has chosen.
  3. The final and best option, for maintaining flexibility and reducing operations overhead and complexity, is to use a cloud-based test automation tool. A cloud-based tool allows you to trigger a test run upon deployment to the integration environment; the integration environment then acts as a datacenter for your entire application, be it one monolithic container or many microservices (see the sketch after this list).
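
As a rough illustration of that trigger (not any particular vendor’s API -- the runner image, credentials, and endpoint are assumptions), the last step of a deployment job might hand the integration URL to a test runner that fans out to a cloud grid:

    # Final step of a deployment job: point the test runner at the integration
    # environment. "myorg/ui-tests" is a hypothetical image wrapping a cloud
    # test-grid client; credentials and endpoint names are placeholders.
    docker run --rm \
      -e TARGET_URL="http://integration.example.com" \
      -e GRID_USER="$GRID_USER" \
      -e GRID_KEY="$GRID_KEY" \
      myorg/ui-tests:latest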

With this setup, you can test your application against all the desired operating systems and browser combinations you want, without having to build an entirely separate and unique infrastructure solely for testing. And the power of parallel testing and visualization of results allows QA teams to keep pace with high-speed testing and release cycles and not stumble under the pressure of rapid delivery chains.

Growing pains

Despite the advantages, Docker is often used only for sandboxing, sometimes for development, but rarely in QA and even more rarely in production. This is because most organizations remain skeptical of the level of security containers provide: Because multiple containers share the same OS kernel, a compromise of that kernel affects every container on the host. VMs, with their hard isolation boundaries, remain the more mature option when it comes to security.

That said, DockerCon and other Docker-related events and conferences provide plenty of examples of companies that are using Docker in production. With these use cases on the rise, it’s only a matter of time before Docker makes its way to more and more production environments. Docker receives extensive support from every corner of the industry and is poised to mature very quickly. It’s easy to see that Docker won’t be limited to sandboxes and dev environments for long.

This means that as a QA leader in your organization, you need to brace yourself for the inevitable Docker invasion. It’s not a question of if, but when. All it may take is one executive attending a conference and finding that one of your competitors already uses Docker across its pipeline.

Lubos Parobek is VP of product at Sauce Labs, where he leads strategy and development of Sauce Labs’ web and mobile application testing platform. His previous experience includes leadership positions at organizations such as Dell KACE, Sybase iAnywhere, AvantGo, and 3Com.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.
