2016: When Linux containers became mainstream

After making a splash in IT, container technology faced its next big mission: become truly useful, not merely a gimmick. Here’s how that unfolded.


In truth, 2016 wasn’t The Year of the Container. That was 2015, when the possibility and promise of containers came along and knocked IT for a loop.

But not everyone wanted creative disruption. Many folks wanted dependable, reliable infrastructure, and they saw in containers a way to build it that hadn’t been possible before. The good news was that despite all the momentum around containers in 2016, major parts of the ecosystem began to stabilize. The novelty has worn off, but in a good way: It means there’s now more attention on how to do containers right, not merely how to do them at all.

Here are five of the major developments in the container world that defined the year and set the course for what’s next.

1. Containers finally got boring—we hope

With all the excitement that arose around containers in 2015, by 2016 people were beginning to feel Container Fatigue™. The sheer speed of change in the container space (new versions of Docker! new container features! new container runtimes!) left a bad taste in the mouths of those who wanted to build reliable, safe, predictable infrastructure with containers. Why, some complained, did we need the Swarm orchestration system bundled in out of the box when other items more clearly warranted attention?

Maybe what was needed was a “boring” fork of Docker, where the most broadly useful functions were broken out and guided by the same kind of overarching community that had been formed around container standards.

Then, in December, Docker announced what sounded like a step in that direction: Docker’s core containerd component was to be spun off and governed by a separate community, so that products other than Docker can be built from it if need be. (Think Google’s Chromium or V8.) Docker can concentrate on the product side, so enterprises get the end-to-end solutions they want, while hackers and devops folks get a stable underlay for their projects and infrastructure.

There’s still a lot about this idea that could go sour. No actual names have been floated yet for which community will get containerd; if it’s one where Docker wields outsized clout, the move won’t mean much. But it’s wise for Docker to reduce the tension between the constant envelope-pushing it does as a for-profit company and the open source technology for which Docker has become the de facto leader and instigator.

Here’s to the boring ones. Without them, we’d never get anything done.

2. The rise (and rise) of Kubernetes

Every time a technology soars into the stratosphere, a number of other, supporting technologies rise along with it. With Docker, it’s been Kubernetes, Google’s software for managing and orchestrating container workloads at scale.

Not everyone using containers needs an industrial-strength orchestration solution, which is why Docker and Kubernetes have prospered as separate projects—and why Docker has its own, now built-in solution, Swarm (which not everyone was keen on).

But those who did need container management really, really needed it. They needed more than better scalability for their apps; they needed better support for persistence, cross-cloud management features, and many other details that benefited enterprise workloads. Kubernetes aimed to provide all that and more.

Also striking: Third parties, not only Google and Docker, showed growing interest in Kubernetes as a target for contribution and support. Intel jumped in with plans to make Kubernetes run better on its hardware, an adjunct to its other work beefing up containers. Sometime Docker rival CoreOS picked up the torch and offered Operators, a way to run apps that aren’t necessarily suited to Kubernetes to begin with. (See also: Kubernetes on Windows.)

3. Windows got Docker—and Kubernetes

It’s hard to overstate the importance of container technology generally, and Docker specifically, arriving with a bang on Windows. Think about it: Microsoft revised the Windows kernel to make room for open source technology so that it could run with minimal modification. That was unthinkable in the Ballmer days, to be sure, but this isn’t Ballmer’s Microsoft and hasn’t been for a long time.

Ultimately, Microsoft needed Docker more than Docker needed Microsoft. Microsoft correctly sensed that enterprise customers who couldn’t run container workloads on Windows Server would have that many more excuses to decamp to Linux. But in the end, it’s a win for both parties: Docker containers have one more platform they can run on, and Windows Server has a new way to appeal to enterprise customers.

It wasn’t only Docker that got added to Windows Server, but Kubernetes as well—for comparable reasons. With Kubernetes such a big hit overall, it only made sense to support it on any platform where Docker was also available. But Kubernetes on Windows also orchestrates Windows Server-native Hyper-V Containers. It’s a bid to make Kubernetes useful by managing existing, Windows-native workloads, instead of forcing a switch to Docker containers.

The one big downside of Docker on Windows: It’s only available on Windows Server 2016 or later. But that’s opened up a golden opportunity for folks like WinDocks, which provides containers on earlier versions of Windows Server that will likely hang around for a long time.

4. Containers became more of a desktop technology

“Desktop,” in this context, has two meanings. One: The workspaces and tool sets provided for developing with containers got a little friendlier. A new version of Docker for desktop development was meant to help developers put together containers on their notebooks, then shuttle them into production with fewer issues arising from the differences between the two environments.
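To make that workflow concrete, here is a minimal sketch of the build-locally, test-locally loop using the Docker SDK for Python. The Dockerfile location, the "myapp:dev" tag, and the port mapping are illustrative assumptions, not details drawn from any particular Docker release.

```python
import docker

# Connect to the local Docker engine, the same daemon the desktop tooling
# manages on a developer's notebook.
client = docker.from_env()

# Build an image from a Dockerfile in the current directory, the same way it
# would be built for production. "myapp:dev" is a hypothetical tag.
client.images.build(path=".", tag="myapp:dev")

# Run the freshly built image locally to check its behavior before pushing
# the very same image to a registry and deploying it elsewhere.
container = client.containers.run("myapp:dev", detach=True,
                                  ports={"8080/tcp": 8080})
print(container.logs().decode())

# Clean up the local test container.
container.stop()
container.remove()
```

The point of the loop is that the artifact exercised on the notebook is the same artifact that ships, which is what shrinks the gap between the two environments.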

The other meaning: More experimentation started in earnest with containers as a delivery mechanism for desktop software. Flatpak used some of the same underlying technology as Docker containers, while Subuser was a straight-up repurposing of Docker for running interactive apps. Both hinted at untapped possibilities for containers as a user-facing technology, not just something that lives on a server.

5. People finally got the idea of what containers are for

In other words, people finally seemed to realize that containers aren’t VMs. They’re an entirely new mode of deployment for IT, and the usage patterns around containers are reflecting that—they’re used mainly for short-running jobs that are discarded when no longer needed, which wasn’t as practical with VMs.
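As a rough illustration of that disposable-job pattern, here is a small sketch using the Docker SDK for Python. The alpine image and the echo command stand in for a real batch job; both are assumptions made for the example.

```python
import docker

# Talk to the local Docker engine.
client = docker.from_env()

# Run a short-lived job in a container. remove=True tells Docker to discard
# the container as soon as the job exits, so nothing lingers afterward; the
# job's image stays cached, ready to launch the next throwaway run.
output = client.containers.run(
    "alpine",                           # hypothetical job image
    ["sh", "-c", "echo batch job done"],
    remove=True,
)

print(output.decode().strip())
```

Spinning up and tearing down a full VM for a job that small is rarely worth the overhead, which is exactly why this pattern was far less practical with VMs.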

Overall, there is a growing sense of the real value and place of containers. Companies that aren’t already tech-native won’t benefit much from them. But when done right, containers help developers and operations teams equally by providing self-contained, disposable units for the tasks that need them. While work to move legacy apps to the cloud via containers is still in its early stages, it has specific advantages (optimizing costs, for example) that hearken back to the case once made for VMs.

To that end, containers and VMs aren’t contradictory; they’re complementary. Both have a proper place in enterprise IT, especially as many workloads still aren’t optimized for container delivery. Microsoft itself twigged to this with its side-by-side implementation of Docker containers and the VM-like Hyper-V Containers. We’ll need both for a long time to come.
