Virtualizing for Resiliency

This was originally written on March 6, 2006.

—————————————————–

A colleague sent me this article on virtualization today. It is not the first virtualization-related piece of information to come across my desk today, either; there have also been calls to customers, calls from vendors, and other pleasantries. The article's main point concerns different strategies for increasing the “yield” from a cubic foot of data center space.

The comparison to agriculture is apt, I believe, since information generation, storage, and retrieval play the role in our society that agriculture played in the societies of years and millennia past. Data centers are our fields and granaries, and the network is the road between our towns, fields, granaries, mills, and bakeries – replaced, respectively, by online communities, data centers, SANs, database and application servers, and web servers. What data center managers are going through now is similar to what Frederick Jackson Turner described in his “closing of the frontier” thesis.

As a result of the closing of the frontier, several significant changes occurred. As the availability of free land was basically exhausted … At the closing of the frontier, we entered a period of concentration — of capital, as with monopolies and trusts — and of labor, responding with unions and cooperation.

We can theorize that, as the opportunity to add thousands of square feet of data center space becomes exhausted, people have to turn to concentrating – or consolidating – their resources for more productivity. Similarly, with power expenditures for running the CPUs and disks (and for cooling them) rising in proportion to the density and amount of space used, and rising again as the cost per unit of power has increased by fifty percent or more over the last two years, managers had better be getting something worthwhile from all those boxes. Suddenly, it is no longer possible to just “add a box” to a rack. As in modern agriculture, the “yield” from all these machines must be watered with power and fertilized with efficient allocation and management.

What does all this have to do with resiliency? A funny thing is taking place. As infrastructure and the servers themselves become virtualized, a specific “machine” (if that term can even be applied to a virtual machine running on a multicore, multiprocessor server with OS partitions using virtual CPU allocations over virtualized network and disk I/O) will be transparently managed for service levels and failover. The difficulties of setting up clustering and multi-site failover will be left in the past – except that new issues will take their place. An individual machine, or even an entire data center, may become non-critical, but all the virtualization management software and hardware will become extremely critical. As “yields” increase, all applications will be considered critical, which means a new set of policies for determining service levels will need to be created.

I see an increasingly close relationship forming among data center owners, application-level specialists, application management, and the user base. Some or all of these entities might be partners of each other or of the end-user community. Hybrid installations, with one full data-and-application set hosted by the “client” and the failover and backup sets hosted by various partners, will probably become the norm. The concept of “insourcing,” as described by Yossi Sheffi in his book “The Resilient Enterprise,” will become more and more common.

As I mentioned in a previous post about SaaS, disaster recovery and contingency policies will increasingly have to treat service level agreements and resiliency among partners and suppliers as a core part of recovery planning. Virtualization holds great promise for improvements in operational efficiency and enterprise resiliency, but a thorough adjustment of policies and expectations needs to take place before these gains can be realized.
