
How Docker turned intricate Linux code into developer pixie dust

Docker characterizes its simplified Linux containers as a standard method for moving applications from machine to machine.


Every once in a while, a technology comes along that nabs attention. The next thing you know, it’s a mission-critical piece of infrastructure at companies big and small.

Hadoop, MongoDB, and Node.js have gone down this path (as have others). The technology that’s come closest to that desirable status in 2013 might just be the Docker container.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":875343,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,enterprise,","session":"B"}']

It’s based on open-source technology that emerged in the mid-2000s — Linux containers, which run isolated applications on a single physical server. But a company called Docker has made the technology easier to implement and far more useful. Through Docker, the Linux container has blossomed into a tool that helps developers build one application and easily move it into a testing environment and then a production environment, and then from one cloud to another, all without modifying the code.
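
To make that workflow concrete, here is a minimal, hypothetical sketch (the base image, file names, and image tag are placeholders, not details from Docker or any company mentioned here). A short build file describes how to package the application:

    # Dockerfile: recipe for packaging the app into a portable image
    FROM ubuntu
    RUN apt-get update && apt-get install -y python
    ADD . /app
    CMD ["python", "/app/server.py"]

Then, from a shell:

    docker build -t myapp .    # build the image once, on a developer's machine
    docker run -d myapp        # run the identical image in testing, production, or another cloud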

In some ways, Docker containers are like virtual machines. But they’re often more lightweight and less demanding on the chips and memory in servers. Plus, the code for building these containers is available for developers to inspect and build on under an Apache open-source license.


Since it became freely available in March, startups have been assembling products based on it, sometimes under the phrase “Docker-as-a-Service,” including Orchard and Copper.io’s StackDock.

Big companies have leapt to embrace Docker containers, too. In making its Infrastructure-as-a-Service (IaaS) public cloud available to all earlier this month, Google said it was adding support for operating system software, including Docker. Red Hat has been moving closer to Docker, too, with support in the new beta version of the Red Hat Enterprise Linux 7 package.

CenturyLink is designing a next-generation platform for cloud computing, in a project called CTL-C, and Docker will play a considerable role in it. Fast-growing IaaS provider DigitalOcean offers an application for launching Docker containers in its droplets, the company’s term for virtual servers.

VMware, a company firmly rooted in the virtual-machine camp, provides support for Docker in vSphere, for running virtual machines on physical servers, and the vCloud Hybrid Service, the VMware public cloud that connects to companies’ on-premises data centers. A spokeswoman confirmed as much in an email to VentureBeat, although the company hasn’t made much noise about Docker.

The latest example came from China, where search company Baidu said its Platform-as-a-Service (PaaS) public cloud, the Baidu App Engine, “is now based on Docker,” according to a press release Docker put out last week. Baidu likes Docker containers because they handle multiple languages and frameworks and provide for a lower cost of development in comparison with more traditional sandboxes.

What these babies can do

Docker isn’t just another open-source tool for startups to commercialize. Engineers at major companies have been talking about how Docker fits into key workflows.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":875343,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,enterprise,","session":"B"}']

EBay Now, the company’s fast delivery service, depends on Docker containers for development and testing, with production use “coming,” said Ted Dziuba, a senior member of technical staff at eBay, during a talk at a Docker event in July.

“The same container for an application works everywhere,” so long as developers know how to connect containers with one another, he said. And that simplifies life for developers.
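In Docker’s early releases, “connecting containers” typically means exposing ports or using container links. Here is a rough sketch with hypothetical container and image names (not eBay’s actual setup):

    docker run -d --name db mydatabase              # start a database container
    docker run -d --name web --link db:db mywebapp  # the app container can now reach "db" by name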

Docker containers have also made it easy to set up rich development environments at RelateIQ, a startup with software for keeping on top of sales contacts. John Fiedler, who works on IT operations for the company, wrote about Docker’s use in a couple of recent blog posts and noted that the company will soon start using Docker in production.

Russian search company Yandex relies on Docker containers to isolate applications in its open-source PaaS, Cocaine. Yandex uses Cocaine for internal purposes and as a platform to provide its own internet browser to consumers.

[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":875343,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,enterprise,","session":"B"}']

Developers at Rackspace’s email service, Mailgun, and CloudFlare have also publicly discussed Docker, but you get the point. Developers like the container model, specifically Docker’s version, and companies are taking it more seriously.

All of this has happened within just a few months of Docker, the company behind Docker containers, making the code available for developers to check out.

Where containers came from

Docker containers started out as internal technology for PaaS provider dotCloud, Docker chief executive Ben Golub said in an interview with VentureBeat. Engineers at dotCloud combined Linux containers with other open-source technologies, including the Linux kernel features called cgroups and namespaces, in a way that stripped out much of the containers’ complexity.

“There was a bunch of sort of arcane languages you needed to learn how to use in order to use LXC,” he said. “We provide a standard API (application programming interface) that made it really easy for developers to take any application and package it inside of a container and that made it really easy for any system administrator to run a server that had 10 or 100 or more containers running on it.”
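In practice, that standard interface is the docker command line (backed by a REST API). A sketch of the administrator’s side, with a hypothetical image name:

    docker run -d myapp          # launch one container; repeat for 10 or 100 more
    docker ps                    # list every container running on the host
    docker stop <container-id>   # stop one container without touching the rest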

[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":875343,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,enterprise,","session":"B"}']

After our conversation, Golub sent an email that explains the need for the technology and shows how customers wanted to use it outside dotCloud’s cloud:

In running the dotCloud PaaS, we had a large number of customers creating a large number of applications using, in our case, a fairly large number of different “stacks” that ran on our shared, hosted infrastructure. To some extent, this is a small version of the “matrix from hell” that I described, where you have large numbers of applications, languages, and frameworks that need to run efficiently, stably, and securely across large numbers of different servers. We used the container-related technology that ultimately evolved into Docker to ensure that we could manage this environment.

As we ran dotCloud, it became clear that customers wanted, not just a large number of different stacks, but the ability to use almost any stack. And, they wanted to run not only on our infrastructure, but to flexibly move between any infrastructure: public or private, virtualized or non-virtualized, and across their favorite flavor of operating system. And, they wanted to be able to integrate with their choice of adjacent technologies, such as Chef, Puppet, Salt, OpenStack, etc. We knew that no company could deliver such an all-encompassing solution, but that we could enable an ecosystem to deliver it. That was the genesis of Docker.

And now Docker has “succeeded even beyond our best hopes,” Golub said. No wonder the company changed its name from dotCloud to Docker in October.

“I think we’ve hit upon something that is making the lives of developers and system administrators and CIOs and everybody in between just a heck of a lot easier,” Golub said.

The company won’t just keep providing open-source technology. It still provides its PaaS. But next year, it will introduce new ways to make money off of Docker containers.

[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":875343,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,enterprise,","session":"B"}']

“Generally speaking, containers are built in one place and then they are run in hundreds of other places, so you need a central hub to take that Docker container, push it and then lots of other places need to find it and pull it,” Golub said. A hosted service that could do that, he said, is the top priority. Management tools could help administrators keep track of where containers are running, who created them, and how they’re performing.
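The open-source tooling already models that push-and-pull flow; a minimal sketch, with a hypothetical user and image name:

    docker build -t username/myapp .   # build the container image in one place
    docker push username/myapp         # push it to a central registry
    docker pull username/myapp         # ...then any other host can pull it
    docker run -d username/myapp       # and run it unchanged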

The company also would like to bring in revenue through professional services, such as commercial support for those who use Docker containers. Partnerships with companies that sell services using Docker containers could be another revenue source, Golub said.

No matter how much money flows to Docker because of the container breakthrough, though, it’s worth pausing for a moment to acknowledge the efforts of a company that contributed to application development in a big way this year.
