Compute Workload Abstraction - Part 3, or Rules are there to contain the fun

Posted by Glenn Augustus on Monday, August 3, 2015

This is part three in a series exploring compute workload abstraction. In parts one and two we looked at containers and how standardisation helps to build solid services. In this part we consider some of the wider implications for standards, security and licensing.

Doesn’t it feel like time for another standard?

Well, maybe yes. As we enter the next phase of abstraction we can execute on the four C’s: Commonality of Capability leads to Commoditization in Computing. As each abstraction layer is commoditized, the layer above can treat its ‘southbound’ layers as generic APIs, which in turn drives further commoditization and common capability across providers. One of the interesting areas here, and the reason a standard may be needed, is how an abstracted component can advertise unique capabilities or services. Take, for example, a particular container or VM that offers scale-up features or proximity to an existing dataset - wouldn’t it be of interest to the domain broker to be able to serve not just generic workload execution engines, but ones that suit the context of the business request? The open standards bodies are clearly excited in this space - any subject with a high level of cross-technology magnetism, such as Docker, will iteratively divide and unite the community very quickly. As has been demonstrated many times in the past, what emerges is the most generally acceptable solution to move forward - not always the best from a subjective viewpoint, but acceptable.
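To make the idea concrete, here is a minimal sketch of what capability advertisement might look like - the manifest format, field names and broker function are all hypothetical stand-ins, not part of any existing standard:

```python
# Hypothetical capability manifest an execution engine (container or VM)
# could expose to a domain broker. All names here are illustrative only.
from dataclasses import dataclass, field


@dataclass
class CapabilityManifest:
    """What one execution engine advertises to the broker."""
    engine_id: str
    scale_up: bool = False                               # can grow CPU/memory in place
    nearby_datasets: set = field(default_factory=set)    # datasets with local proximity


def choose_engine(manifests, required_dataset=None, needs_scale_up=False):
    """Return the first engine whose advertised capabilities fit the request context."""
    for m in manifests:
        if needs_scale_up and not m.scale_up:
            continue
        if required_dataset and required_dataset not in m.nearby_datasets:
            continue
        return m.engine_id
    return None


# Example: a business request that needs proximity to a 'sales-2015' dataset.
fleet = [
    CapabilityManifest("vm-a", scale_up=True),
    CapabilityManifest("ct-b", nearby_datasets={"sales-2015"}),
]
print(choose_engine(fleet, required_dataset="sales-2015"))  # -> ct-b
```

The point is not the data structure itself but that, with a common way to describe such capabilities, a broker can match workloads on context rather than treating every engine as interchangeable.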

What about the S word?

Security is one of those terms that can send shivers down the spine of an organization, and rightly so if you don’t consider it - and consider it you must. Fortunately, with the abundance of compute power and the more structured use of interface protocols, security can begin to be treated as a mainstream activity rather than a black art. Many applications now build security tests into their continuous integration platform, meaning that if the code doesn’t pass, it doesn’t get promoted. Of course you are only testing for what you know, but the use of RESTful protocols over known network ports means we can shut down huge areas of exposure. For communications between two elements of an application stack deployed in its containers, the broker could use software-defined networking (SDN) and network function virtualization (NFV) to deny all ports except those required, and, more importantly, only for the required amount of time. How many times have you come across a security hole that was opened to facilitate ‘that’ transfer, or ‘that’ application, and never closed off again? Of course security doesn’t begin and end with the network, but it’s a good place to start.
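As a rough illustration of that broker-driven, time-limited policy, here is a small sketch in Python - the rule model is a made-up stand-in for whatever SDN/NFV controller API would actually enforce it:

```python
# Sketch of a default-deny port policy with time-limited exceptions.
# The PortPolicy class is hypothetical; a real broker would push these
# rules to an SDN/NFV controller rather than evaluate them locally.
import time
from dataclasses import dataclass


@dataclass
class AllowRule:
    src: str            # e.g. "app-tier"
    dst: str            # e.g. "db-tier"
    port: int           # the single port this stack element actually needs
    expires_at: float   # absolute time after which the rule is withdrawn


class PortPolicy:
    """Default-deny: only explicitly allowed, unexpired rules permit traffic."""

    def __init__(self):
        self.rules = []

    def allow_for(self, src, dst, port, seconds):
        """Open a port between two elements for a bounded window only."""
        self.rules.append(AllowRule(src, dst, port, time.time() + seconds))

    def prune(self):
        """Drop rules whose window has closed, so 'that' transfer can't linger."""
        now = time.time()
        self.rules = [r for r in self.rules if r.expires_at > now]

    def permits(self, src, dst, port):
        self.prune()
        return any(r.src == src and r.dst == dst and r.port == port
                   for r in self.rules)


# Example: open 5432 between two elements of the stack for ten minutes only.
policy = PortPolicy()
policy.allow_for("app-tier", "db-tier", 5432, seconds=600)
print(policy.permits("app-tier", "db-tier", 5432))  # True, until the window expires
```

The design choice that matters is the expiry: because every exception carries its own end time, nothing stays open simply because somebody forgot to close it.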

License to spawn?

If there is a new platform in town, you can be sure the ISV license policy makers will not be far behind with an interesting way to attempt to protect and invent revenue streams. This is why open source, with its more liberal licensing model, establishes a foothold in these newer commodity services - along, of course, with the potential for rapid development. Traditional application vendors tend to be slow to adopt, not only because of technology differences, but because they need to scale, train and distribute staff to support such platforms. That is not something that can generally be spun up on a sixpence, so they need to know there will be traction in the market for the technology.

Next Time…

In part four I will take a view on the future for bare metal and how some current management techniques will need to be cast into history to truly take advantage of this new technology.

Don’t forget to comment, get involved and get in touch using the boxes below, or reach out directly @glennaugustus

