For nearly two years, Linux containers have dominated the world of enterprise IT, and for good reason: they tackle problems in application development and computing at scale that virtualization simply cannot, and they let enterprises truly embrace concepts like DevOps and microservices (the service-oriented architecture dream of years gone by). That sound you hear is IT vendors stampeding toward the container bandwagon. But, as with every emerging tech trend, that isn't always a good thing, because not everyone is walking the walk, regardless of what their marketing might say.
An extension of the operating system, specifically the Linux kernel, Linux containers are built from a runtime and an image format, then orchestrated with network and storage resources across clusters of hosts. The end result is a set of lightweight, dynamic, and secure application services, each self-contained inside a Linux container and able to run on its own or in conjunction with other containerized applications, forming a far more flexible, if more complex, enterprise application.
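To make that "extension of the kernel" idea concrete, here is a minimal sketch of the primitive beneath every container runtime: an ordinary process launched into its own kernel namespaces. This is an illustration, not any vendor's actual implementation; it assumes a Linux host and root privileges, and uses /bin/sh purely as an example workload.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Minimal sketch: a "container" is, at bottom, a normal process
// started in its own kernel namespaces. Run as root on Linux.
func main() {
	cmd := exec.Command("/bin/sh") // illustrative; any binary works
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New hostname, PID, and mount namespaces isolate the process
		// from the host. Real runtimes layer cgroup limits, a root
		// filesystem image, and network namespaces on this primitive.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside the new PID namespace, the shell sees itself as PID 1; what a full runtime adds on top is image unpacking, resource limits, and networking.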
These capabilities represent a significant threat to some segments of IT and deliver yet another body blow to proprietary computing stacks (note the “Linux” in the name). It should come as no surprise, then, that Linux containers and the inextricably linked Docker container format face several not insignificant threats, even before the technology is out of its proverbial diapers. So for a technology that is generally viewed so positively, what’s trying to kill containers?
At a high level, there are three concerning issues facing Linux container adoption and container-based infrastructure deployments:
- Fragmentation of standards
- Proprietary code and “fauxpen” source
- Container washing
Fragmentation
Fragmentation is arguably the most dangerous threat facing Linux container adoption in the business world. Without clear, readily adopted standards, specifically at the image-format and orchestration levels, most enterprises will be loath to embrace the technology. The simple answer to "why" is that no IT decision-maker wants to be responsible for backing a losing horse. To use an example from the consumer world, consider someone who stocked their house with HD-DVD players and accessories only to find out that Blu-ray won the day. Now multiply that loss many times over, and that's what the enterprise world faces when it comes to Linux container standards.
While fragmentation is by far the most serious threat facing Linux container adoption, it's also the one being most readily addressed. Two new foundations, both governed by the Linux Foundation, have launched in recent months to help remove the specter of fragmentation from the world of Linux containers.
The Open Container Initiative aims to deliver low-level standards for the container image format and runtime that underpin container-based application development.
The Cloud Native Computing Foundation seeks to drive standards, best practices and interoperability for the technologies used to develop, run and scale distributed applications, with Kubernetes as the starting point for container orchestration.
While these two organizations are a fantastic start, the complexities of the overall container lifecycle require constant attention from a standards viewpoint. As the technology continues to be refined for enterprise adoption, it's highly likely that more "format war"-type scuffles will ensue, so it will be up to the open source community at large to resolve and codify these arguments into common standards and practices. With both initiatives modeled after open-source community best practices, we can be hopeful that standardization continues to evolve in the open, without hindering or slowing down innovation.
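To ground what "container orchestration" means in practice, here is a hedged sketch using the Kubernetes Go client (client-go). It assumes a reachable cluster and a kubeconfig at the default path; it simply asks the control plane which containerized workloads (pods) it has scheduled across the cluster's hosts.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default path (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List the pods the orchestrator is running in the default namespace.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```

The point of a common orchestration standard is that this kind of code keeps working no matter which vendor's distribution runs the cluster underneath.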
‘Partially open’ is ‘fully closed’
The threat of fragmentation also raises another critical issue when it comes to containers: open-core, or "fauxpen," offerings built around Linux containers. Despite the "Linux" in the name, containers hold broad appeal across proprietary and open stacks; problems arise, however, when proprietary code and services begin to worm their way into container-based solutions that are supposedly fully open. The "fauxpen" threat isn't new. We saw it first with Unix and most recently with cloud computing, particularly platform-as-a-service (PaaS) and OpenStack-based offerings that are ostensibly open but layer proprietary technologies on top of open-source foundations.
Linux containers, however, are early in their enterprise IT adoption cycle (although the pace is certainly picking up); if proprietary hooks sink into the technology now, IT's mood will almost certainly sour on what should be an innovation, not a continuation of proprietary legacy systems. From closed stacks to exorbitant licenses to greatly scaled-back innovation, adding "fauxpen" code to foundational technologies built on the blood, sweat, and tears of the community can quickly dampen the enthusiasm and breakthroughs that grow around an open base.
Container washing
During the heady days of the cloud computing boom (which we are arguably still undergoing), the notion of cloud washing was born. Effectively, an IT vendor would take a pre-existing product and "wash" its marketing collateral, spec sheets, and so on with cloud jargon, in hopes of convincing customers and prospects that the vendor was a player in the burgeoning world of cloud computing.
Now, we’re seeing container washing in the same vein, with vendors and solutions that only tangentially relate (or don’t relate at all) to the container boom trying to horn their way into the conversation. The threat here is far more surreptitious than fragmentation or fauxpen source; it’s one of subverting what a container actually is.
One example is the confusion over containers versus virtual machines. We certainly can run a virtual machine in a container, or vice versa, but the two technologies solve different problems: virtualization abstracts the hardware, bundling infrastructure services together with application code, while containers cleanly separate the application into lightweight software assets that are ideally suited to delivering services.
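The distinction is easy to demonstrate. In the hedged Go sketch below (same assumptions as the earlier one: Linux host, root privileges), a process dropped into fresh namespaces still reports the host's kernel release, because containers share the host kernel; a virtual machine, by contrast, boots a kernel of its own.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// kernelRelease runs `uname -r`, optionally inside new namespaces.
func kernelRelease(namespaced bool) string {
	cmd := exec.Command("uname", "-r")
	if namespaced {
		// Isolated like a container, yet still on the host's kernel.
		cmd.SysProcAttr = &syscall.SysProcAttr{
			Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
		}
	}
	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	// Both lines print the same release: no second kernel, unlike a VM.
	fmt.Print("host:       ", kernelRelease(false))
	fmt.Print("namespaced: ", kernelRelease(true))
}
```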
By conflating existing technologies with containers, this threat creates more confusion and headaches for enterprises weighing container adoption. Instead of simply picking what works best for them, IT teams now have to investigate whether a solution actually delivers the benefits of Linux containers or is purely marketing speak. This could easily lead to a drop in adoption, as IT teams are known for taking the path of least resistance when deploying new technologies, as the age-old phrase goes: "You don't get fired for buying [well-known IT vendor here]."
The above three are the biggest, though not the only, threats facing the growing ecosystem around Linux containers. That isn't to say these issues will necessarily impede adoption; fragmentation is already being addressed, and IT leaders who have lived through the Unix and cloud wars are understandably wary of open-core and jargon-washed products. But it's important to remember that the path to innovation is riddled with potholes; it was for Linux, it was for cloud computing, and it is for Linux containers.
It's up to the open-source community, the enterprise world, and the startups and established IT vendors building innovations around Linux containers to bypass these obstacles through collaboration and a commitment to helping containers truly realize their enterprise potential.
Lars Herrmann is general manager of the integrated solutions business unit and container strategy at Red Hat.