I’ve always been a big fan of VMware Workstation. It allows me to test any product/project on any OS right on my own workstation. For about a four-year stretch I actually ran Ubuntu as a host with an image of the corporate Windows build as a guest, which made me quite happy.
But as much as I liked the hypervisor on my desktop, I was never a big fan of the idea of virtualizing servers. It always felt like trading $1 for 90 cents. If a server has compute capacity X and the overhead of running a guest is Y (because each guest needs to run its own OS stack on top of the hypervisor), then you’re wasting roughly Y × [# of guests] of your server’s capacity just on running the full OS stack for each guest.
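The Y × [# of guests] estimate above can be sketched numerically. A minimal illustration, where the capacity and overhead figures are made-up assumptions, not measurements:

```python
# Back-of-the-envelope sketch of the hypervisor-overhead argument.
# All numbers here are illustrative assumptions, not benchmarks.

def wasted_capacity(per_guest_overhead: float, num_guests: int) -> float:
    """Capacity lost to running a full OS stack in each guest: Y * N."""
    return per_guest_overhead * num_guests

server_capacity = 100.0   # total compute capacity X (arbitrary units)
per_guest_overhead = 2.0  # overhead Y of each guest's OS stack (assumed)
guests = 10

wasted = wasted_capacity(per_guest_overhead, guests)
print(f"wasted: {wasted} of {server_capacity} units "
      f"({wasted / server_capacity:.0%})")
# → wasted: 20.0 of 100.0 units (20%)
```

The point is that the waste scales linearly with guest count, regardless of how lightly loaded each guest actually is.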
I suspect it’s for that same reason that the biggest corporate data centers, operated by the likes of Google, Amazon, etc., do not rely on vendor-provided, hypervisor-based virtual machines. Those companies are pretty stingy with their total compute capacity and do not trade $1 for 90 cents.
But how, then, can a company reap the much-advertised benefits of virtualization (primarily isolation and manageability) without wasting its precious compute capacity and spending lots of cash on vendor-supplied hypervisor licensing?
Experts (yes, I’m referring to you, Kenny Simpson) have for many years suggested that a technical approach based on Linux Containers would probably be a much more viable solution, simply because you could achieve the same isolation and manageability benefits without concurrently running multiple full OS kernels. In addition, you would get a much more realistic view of total resource consumption and be able to identify system bottlenecks more accurately. If you’ve ever shared 2 physical network interface cards with 30 guest operating systems (each of which believes it has those 2 to itself), you know what I mean.
Enter Docker v1.0
Released last week and quickly embraced by the likes of Google, Red Hat, IBM, Microsoft, Canonical, Parallels, and other major players, Docker looks to be exactly that: an entirely open-source solution built on top of lxc and libcontainer, written in Go.
The potential game changer is really libcontainer, which is the closest thing we have to a standard around “containerization” and which, given its truly open-source nature, has the potential to become the first widely adopted methodology for running and managing containers across a variety of operating systems.
Given these developments, there is a strong likelihood that server-side virtualization/containerization can become as sensible as it has been on the desktop, and, if done right, will be far cheaper than VMware-style virtualization is today.