I have a question: Does anybody really know what Docker does?
I’m sure many savvy programmers know this programming architecture and platform well. But I bet there are also a lot of you out there, like me, who have only become familiar with this Docker thing in the explosion of media hype. So let me tell you what I’ve learned so far.
I confess, up to a few months ago, I had no idea. Containers, previously an esoteric platform for distributed apps relegated to the backwaters of programming culture for a decade, suddenly became famous overnight. Now we’re being told that Docker will take over the world, put monolithic systems out of business, and solve world hunger.
When I launched project “Understand Docker” a few months ago, I was thinking: Docker can’t be perfect. There’s got to be some issues. Otherwise I’d advertise on Craigslist for a Docker Dude, hire him, and suddenly I’d have a company with a billion-dollar valuation. Maybe next year.
To bone up on Docker, I read Hacker News and stumbled across DevopsU, where there are lively debates about the applications and pitfalls of Docker. I went through some Docker documentation. (I kind of like saying that. “Docker Documentation, Docker Documentation.”) I talked to some leading venture capitalists and real systems engineers. Sadly, I spent hours reading about Docker in a variety of hacker journals and blogs so I could understand exactly what it is. (Isn’t the Internet fun?)
On the highest level, here’s what I found is the most interesting thing about Docker: Docker represents operating-system-level virtualization. What VMware, KVM, and other virtualization platforms have done for hardware, Docker has the potential to do for operating systems (OSes). It allows applications to become more distributed by offering OS-level portability.
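To make that portability idea concrete, here’s a minimal sketch of the workflow (illustrative commands only; “myapp” is a hypothetical image name, not a real project):

```shell
# On your build machine: bake the app and its dependencies into one image.
docker build -t myapp .

# On any other Linux host running the Docker daemon: same image, same behavior.
# No reinstalling dependencies, no "works on my machine."
docker run -d -p 8080:8080 myapp
```

The image carries the application’s userland with it, which is why the same artifact runs anywhere a compatible Linux kernel and Docker daemon are available.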
Steve Herrod, partner at General Catalyst and former VMware CTO, helped me to understand this. (I’ll be publishing the full interview with Herrod on Rayno Report.)
“Docker is building on operating-system-level virtualization,” says Herrod. “You’re talking to an operating system. That has pros and cons to it. On the pro side, it takes a very application-centric view of the world; you are skipping a lot of the layers and it can be lighter weight. On the flip side, it needs to be a certain type of Linux to take advantage of the system.”
I’ve also noticed that Docker’s got some issues. It represents lots of new small containers of code running around the cloud. I like to think of them like all the NFL running backs cast off by New England Patriots coach Bill Belichick. They’re homely talents looking for a home — but in need of careful guidance and coaching in the right system.
Software vagabonds in the cloud — or NFL running backs — need to be watched. For example, you might throw an illegal instruction error after you think you’ve done everything right. Or you may find that Docker’s base Ubuntu 12.04 image has not applied any patches. Oh dear!
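The unpatched-base-image problem has a simple (if partial) mitigation: don’t assume the base is current, and pull in updates yourself. A hypothetical Dockerfile sketch:

```dockerfile
# Sketch only: don't trust a base image to arrive fully patched.
FROM ubuntu:12.04

# Pull in the security updates the base image may be missing.
RUN apt-get update && apt-get -y upgrade

# ...then layer your application on top.
```

Even then, the image is only as fresh as the last time you rebuilt it, so this belongs in a regular rebuild pipeline, not a one-time fix.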
As with any programming platform, of course, you have to be very careful about security and how the new code can interact with the system and open up security holes. This is from the official Docker documentation:
“Changing the default docker daemon binding to a TCP port or Unix docker user group will increase your security risks by allowing non-root users to gain root access on the host. Make sure you control access to docker.”
Wow. Also, says the official Docker Engine documentation, remember that if you are binding to a TCP port, anyone with access to that port has full Docker access, so it is not advisable on an open network.
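If you genuinely need remote access to the daemon, the documented approach is to put TLS client authentication in front of the TCP socket rather than exposing it bare. A sketch of the daemon invocation (the certificate paths are placeholders — you’d generate them with your own CA first):

```shell
# Sketch: require TLS client certificates on the daemon's TCP endpoint,
# so only holders of a cert signed by your CA get Docker (i.e., root) access.
dockerd \
  --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376
```

Without `--tlsverify`, anyone who can reach that port effectively has root on the host — which is exactly the warning in the quote above.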
In other words, be careful when you are fooling around with Docker in your spare time; you might open up the entire network to a bunch of angry, drunk Russian hackers.
All of this comes with the territory of the “container” approach, or the “microservice” genre that Docker runs in. Benjamin Wootton, CTO of Contino, a London-based consultancy specializing in applying DevOps and continuous delivery to software projects, has written a popular outline of the downsides of microservices, called “Microservices: Not a Free Lunch!”
Wootton runs through some of the microservice drawbacks: processor overhead, the need for monitoring, and the demands on operational support. An IT manager needs to think about the fact that you now will have to manage dozens, if not hundreds, of new pieces of wandering code. By creating a more highly distributed system, you also create more complexity.
“Where a monolithic application might have been deployed to a small application server cluster, you now have tens of separate services to build, test, deploy and run, potentially in polyglot languages and environments,” writes Wootton.
In other words, you have to be careful throwing your Docker around because it might not speak the same language as everything else in your system.
“You don’t need to Dockerize everything,” writes Wootton.
I agree. I’m tired of Dockerizing everything. It might be time to scale back those VC valuations, get rational, and check your Docker mania. Not everything is perfect. Docker is an innovative architecture, but it is yet another software programming technique, requiring new understanding, management, and controls. It’s a big, valuable trend, but let’s not get carried away.