Compute Workload Abstraction - Part 2, or Virtualization is dead, long live Virtualization!

Posted by Glenn Augustus on Monday, July 27, 2015

This is part two of a series exploring the topic of compute workload abstraction. In part one we looked at the unsung hero in virtualization, the HAL; in this part we dive a little deeper and begin to explore the potential benefits for applications.

Is traditional virtualization dead, then?

We ended part one by asking whether containers are a traditional virtualization killer, and in my view they are both complementary and competitive. Sure, some ground will be conceded, but that simply follows the evolutionary process of what I call the absorption of differentiation into expectation.

The functions that were once breaking new ground are now considered a baseline, or 'table stakes', for any product – in this case the hypervisor. The beauty of the hypervisor is that it provides a super-HAL, ironing out the differences between similar but ultimately different hardware types. In turn this clears the way for a northbound stack in which managing those differences is no longer a distraction from progress, and where the benefits of standardization far outweigh the short-term differentiation that a unique component feature can bring.

Plus, as containers proliferate, the expansive count of full-fat operating system instances subsides and becomes honed to support right-sized platforms, where VMs provide the variants of kernel or OS flavor rather than serving the decaying concept of being the lowest common denominator for convenient system isolation.

Are containers production-ready?

For some workloads, definitely. For others there is still a way to go. For Google, Twitter, Facebook – indeed almost every cloud-scale operation – containers form the scalable core of the compute landscape. More widely, the industry has taken the container platform Docker into common vocabulary very quickly. The Docker platform is really a consumer, packager and orchestrator of the underlying container tech inside Linux, and it is supported by Google Compute Engine, Azure and Amazon as a method to spawn a library of application and quasi-operating-system images. Other entrants such as CoreOS with rkt (Rocket) are fast gaining a following, and a number of existing technologies play to the service provider market, such as OpenVZ – and of course OpenStack, which in combination with Magnum aims to solidify an approach for container management, something a later post will return to.
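To make that packaging role concrete, here is a minimal Dockerfile sketch. The image, file and port names are illustrative assumptions for this post, not taken from any product mentioned above:

```dockerfile
# Hypothetical example: package a small app and its runtime
# dependencies into one portable image (all names illustrative).
FROM ubuntu:14.04

# Install the runtime the application needs
RUN apt-get update && apt-get install -y python

# Copy the application into the image
COPY app.py /opt/app/app.py

# Declare the listening port and how to start the app
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
```

Building this with `docker build -t myapp .` yields an image that can be pushed to a registry and run unchanged on any Docker host – the 'library of application images' idea in practice.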

How does my application benefit?

The obvious one is the dynamic management of scalability. Applications can grow and shrink resources as needed, on demand. Additionally, a large portion of the value comes from having a standardized environment. In the same way that the hypervisor's super-HAL harmonizes the differences in the hardware layer, containerization does exactly this for the runtime environment. Testing inside a dev or QA environment then gives far greater confidence that an application will function and operate as planned once it is promoted to production. This area alone can remove headaches for deployment teams and avoid substantial costs.
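As a sketch of what this looks like in practice – here using a Docker Compose file with an illustrative service and image name, an assumption rather than a prescribed setup – the same definition travels unchanged from dev through QA to production:

```yaml
# Hypothetical docker-compose.yml: one service definition,
# identical in every environment (names are illustrative).
web:
  image: registry.example.com/myapp:1.0   # the same image in dev, QA and prod
  ports:
    - "8080"            # let the host pick the published port so copies coexist
  environment:
    - APP_ENV=production   # the only thing that varies per environment
```

Scaling then becomes a single command, `docker-compose scale web=5`, which launches five identical containers from the same image – the grow-and-shrink-on-demand behavior described above.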

Next Time…

In part three we look at the case for container standards, security and considerations for ISVs.

Don’t forget to comment, get involved and get in touch using the boxes below, or reach out directly @glennaugustus

