This is part of a series of light-reading points of view intended to foster opinion on wide-ranging technology topics, and maybe throw in some education along the way. In this series we look to provide some insight into the increasingly topical area of workload abstraction. As always, please feel free to comment and get in touch!
How virtualization was just the first step
So we all know where all this virtualization came from, right? It's commonly thought that the sole business case for virtualization was the inefficient use of existing hardware resources, and of course that is part of the story. But the unsung hero of virtualization was the simplified, standardized set of resources presented by the hardware abstraction layer. It meant that when an application was running in a virtual machine, there was a good chance that problems in the operating system were down to an exhausted resource or poor configuration, not a bad driver or a kernel fault. The combination of vendor testing and almost instant worldwide feedback created an ecosystem in which any significant problem was either removed before release or patched before you had a chance to download it anyway! Not every problem was eradicated, but the reduction in variants at the hypervisor level created a far more standard environment in which operating systems and applications could live in relative harmony.
That sounds great. What’s the problem?
As usual, very few things are perfect, and a number of issues remain. Virtual machines are still a point of management, and they still run instances of operating systems, so they retain the right to be different from each other. They all need monitoring, and to a significant degree they proliferate because application code (and maybe coders) enjoy isolation. Code likes a sunny beach with no one else on it, free to do as it chooses. The multi-tier application model is as much about isolation as it is about performance. To support existing and upcoming micro-service architectures, abstracting the workload from the complexities of the physical environment is critical to stabilizing the landscape in which those micro-services execute.
So what’s new?
Step forward the new kid on the block: containerization. Or at least it's the kid with the coolest toy on the block, because as some of you will know it's not so new: in the UNIX and Linux world, isolation primitives such as chroot, jails and, more recently, Linux control groups (cgroups) and namespaces have been around for some time. So why all the buzz now if it isn't new? As with many technologies, it's the combination and alignment of disparate components that allows a concept to be realized. The concept is simple: create an environment for a workload to operate in that has the performance and security isolation of a VM but not the overall baggage. Make it quick to start (spawn) and quick to clean up on exit. Give it enough of an ecosystem to operate fully in terms of libraries (and let it bring its own). But above all, make it portable.
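To make that concept a little more concrete, here is a minimal sketch, written in Go and assuming a Linux host with root privileges, of the kind of kernel primitives (namespaces, alongside the cgroups mentioned above) that container runtimes build on. It is an illustration only, not how any particular product does it: the program re-executes itself inside new UTS, PID and mount namespaces, so the workload gets its own hostname and process tree while still sharing the host kernel rather than a full hypervisor stack.

```go
// A minimal namespace demo (Linux only, run as root).
// Usage: sudo go run main.go run /bin/sh
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: run <command> [args...]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "run":
		run()
	case "child":
		child()
	default:
		fmt.Println("unknown command:", os.Args[1])
		os.Exit(1)
	}
}

// run re-executes this binary as "child" inside fresh namespaces.
func run() {
	cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New hostname, PID and mount namespaces: the process sees its own
		// small world but shares the host kernel, so there is no hypervisor
		// or guest OS to boot.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	must(cmd.Run())
}

// child runs inside the new namespaces and launches the requested workload.
func child() {
	// Inside the new PID namespace this process is PID 1.
	fmt.Printf("running %v as pid %d\n", os.Args[2:], os.Getpid())
	// Changing the hostname here only affects the new UTS namespace.
	must(syscall.Sethostname([]byte("container")))

	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Run it with something like `sudo go run main.go run /bin/sh` and the shell reports itself as PID 1 with its own hostname, yet it starts in a fraction of a second and leaves nothing behind on exit, which is exactly the quick-to-spawn, quick-to-clean-up behaviour described above. Real container runtimes layer cgroup resource limits, image filesystems and networking on top of these same primitives.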
Next Time…
Is traditional virtualization dead? Application benefits and much, much more… Don't forget to comment, get involved and get in touch using the boxes below, or reach out directly @glennaugustus