
In truth, 2016 wasn’t The Year of the Container. That was 2015, when the possibility and promise of containers came along and knocked IT for a loop.
But not everyone wanted creative disruption. Many folks wanted dependable, reliable infrastructure, and they saw in containers a way to build it that hadn’t existed before. The good news was that despite all the momentum around containers in 2016, major parts of the ecosystem began to stabilize. The novelty’s worn off, but in a good way―it means there’s now more attention on how to do containers right, not merely how to do them at all.
Here are five of the major developments in the container world that defined the year and set the course for what’s next.
1. Containers finally got boring―we hope

With all the excitement that arose around containers in 2015, by 2016 people were beginning to feel Container Fatigue. The sheer speed of change in the container space―new versions of Docker! new container features! new container runtimes!―left a bad taste in the mouths of those who wanted to build reliable, safe, predictable infrastructure with containers. Why, some complained, do we need the Swarm orchestration system out of the box, when other items clearly warranted attention?
Maybe what was needed was a “boring” fork of Docker, where the most broadly useful functions were broken out and guided by the same kind of overarching community that had formed around container standards.
Then, in December, the company announced what sounded like a step in that direction: Docker’s core containerd component was to be spun off and governed under a separate community, so products that aren’t Docker can be built from it if need be. (Think Google Chromium or V8.) Docker can concentrate on the product side, so enterprises get the end-to-end solutions they want; hackers and devops folks get a stable underlay for their projects and infrastructure.
There’s still a lot about this idea that could go sour. No actual names have been floated yet for which community will get containerd; if it’s one where Docker wields outsized clout, it won’t mean much. But it’s wise for Docker to reduce the tension between the constant envelope-pushing it does as a for-profit company and the open source technology for which Docker has become the de facto leader and instigator.
Here’s to the boring ones. Without them, we’d never get anything done.
2. The rise (and rise) of Kubernetes

Every time a technology soars into the stratosphere, a number of other, supporting technologies rise along with it. With Docker, it’s been Kubernetes, Google’s software for managing and orchestrating container workloads at scale.
Not everyone using containers needs an industrial-strength orchestration solution, which is why Docker and Kubernetes have prospered as separate projects―and why Docker has its own, now built-in solution, Swarm (which not everyone was keen on).
But those who did need container management really, really needed it. They needed more than better scalability for their apps; they needed better support for persistence, cross-cloud management features, and many other details that benefit enterprise workloads. Kubernetes aimed to provide all that and more.
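To give a flavor of what that declarative management looks like, here is a minimal sketch of a Kubernetes Deployment manifest. The names and image are hypothetical, and it uses the current `apps/v1` API rather than the beta API that shipped in 2016:

```yaml
# Sketch of a Kubernetes Deployment (hypothetical names and image).
# The orchestrator keeps three replicas of the app running and
# reschedules pods automatically if a node goes away.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical name
spec:
  replicas: 3                        # Kubernetes maintains this count
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: example/web-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The point of the format is that you declare the desired state (three replicas) and Kubernetes continuously works to make reality match it, which is exactly the kind of detail enterprise operators didn’t want to script by hand.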
Also striking: Third parties, not only Google and Docker, showed growing interest in Kubernetes as a target for contribution and support. Intel jumped in with plans to make Kubernetes run better on its hardware, an adjunct to its other work beefing up containers. Sometime Docker rival CoreOS picked up the torch and offered Operators to run apps that aren’t necessarily suited to Kubernetes to begin with. (See also: Kubernetes on Windows.)
3. Windows got Docker―and Kubernetes

It’s hard to overstate the importance of container technology generally, and Docker specifically, arriving with a bang on Windows. Think about it: Microsoft revised the Windows kernel to make room for open source technology, so it could run as-is with minimal modification. That was unthinkable in the Ballmer days, to be sure, but this isn’t Ballmer’s Microsoft and hasn’t been for a long time.
Ultimately, Microsoft needed Docker more than Docker needed Microsoft. Microsoft correctly sensed that enterprise customers who couldn’t run container workloads on Windows Server would have that many more excuses to decamp to Linux. But in the end, it’s a win for both parties: Docker containers have one more platform they can run on, and Windows Server has a new way to appeal to enterprise customers.
It wasn’t only Docker that got added to Windows Server, but Kubernetes as well―for comparable reasons. With Kubernetes such a big hit overall, it only made sense to support it on any platform where Docker was also available. But Kubernetes on Windows also orchestrates Windows Server-native Hyper-V Containers. It’s a bid to make Kubernetes useful by managing existing, Windows-native workloads, instead of forcing a switch to Docker containers.
The one big downside of Docker on Windows: It’s only available on Windows Server 2016 or later. But that’s opened up a golden opportunity for folks like WinDocks, which provides containers on earlier versions of Windows Server that will likely hang around for a long time.
4. Containers became more of a desktop technology

“Desktop,” in this context, has two meanings. One: The workspaces and tool sets provided for developing with containers got a little friendlier. A new version of Docker for desktop development was meant to help developers put together containers on their notebooks, then shuttle them into production with fewer issues arising from the differences between the two environments.
The other meaning: More experimentation started in earnest with containers as a delivery mechanism for desktop software. Flatpak used some of the same underlying technology as Docker containers, while Subuser was a straight-up repurposing of Docker for running interactive apps. Both hinted at untapped possibilities for containers as a user technology, not just an item on a server.
5. People finally got the idea of what containers are for

In other words, people finally seemed to realize that containers aren’t VMs. They’re an entirely new mode of deployment for IT, and the usage patterns around containers are reflecting that―they’re used mainly for short-running jobs that are discarded when no longer needed, which wasn’t as practical with VMs.
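That disposable, run-to-completion pattern maps directly onto the Kubernetes Job resource. A minimal sketch, with hypothetical names and image, might look like this:

```yaml
# Sketch of a Kubernetes Job (hypothetical names and image).
# Unlike a long-lived service, a Job runs a task to completion
# and is then simply discarded, much like the short-running
# container workloads described above.
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report                        # hypothetical name
spec:
  backoffLimit: 2                             # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never                    # one-off work, not a service
      containers:
      - name: report
        image: example/report-generator:1.0   # hypothetical image
```

Doing the equivalent with a VM would mean booting a full guest OS for a task that might run for seconds, which is why this pattern only became routine once containers arrived.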
Overall, there is a growing s