Building Clouds and Implications


Cloud Computing Update

A little while back, Intel flew a number of journalists out to Portland, Oregon for an update on their cloud computing efforts. Various ecosystem partners that play a role in Intel’s cloud computing strategy were present, including VMware, Citrix, NetApp, EMC and others. The overall focus of the event was making the benefits of cloud computing more widely available to end customers. While cloud computing is typically more of a business and market phenomenon, there were also some very interesting implications for the technologies that go into modern microprocessors and systems.

Currently, most cloud computing is developed internally at large internet-centric companies like Amazon, Google and Microsoft, and made available through various services such as EC2, Gmail and Live. These companies can dedicate considerable resources to their IT infrastructure by virtue of their tremendous scale, and then offer that infrastructure to the public. Longer term, cloud computing must harmonize and co-exist with traditional IT (the so-called “private cloud”) to satisfy regulatory requirements and the needs of customers. This is challenging because internal IT groups at many traditional product or service companies are far more resource-constrained than a company like Facebook or Google, where computing is central to the business.

Cloud Builders is an industry initiative (led by Intel) that focuses on making cloud computing more accessible to the broader IT community. This entails lowering the cost and complexity of deployment and facilitating interoperability between public and private cloud infrastructure. The obvious beneficiaries are IT departments and smaller enterprise vendors, which tend to lack the resources or expertise of a leading OEM like IBM or a savvy customer like Google.

One of the focuses of Cloud Builders is creating flexible reference architectures and guides that are widely available. These reference architectures address the considerable challenges that arise from putting disparate servers, storage, networking and software into production. As with most engineering, the complexity comes from interaction effects. For example, HP validates its servers to function correctly and ensures that management is easy, but it probably does not have comprehensive end-to-end testing for every combination of third-party hardware and software that customers wish to integrate. Even where such testing exists, there is often little documentation for setting up the complete system.

The objective of the reference architectures is to tame the complexity of IT integration by pre-validating various solutions and providing implementation guides and best practices for IT staff to follow. The reference architectures are meant to be a flexible starting point, so that they can be customized to suit specific needs. Appliance-like implementation and ease of use are certainly goals, but not at the price of a ‘one size fits all’ approach or sacrificing many of the benefits of cloud computing (e.g. reliability, flexibly scaling workloads up or down).

To some extent, this creates more competition for pure public cloud services like EC2, but that is an overly simplistic view. The reality is that some workloads are too critical to put in the hands of an outside provider – or regulations may preclude this option. For instance, legal, medical and financial data are quite sensitive and may carry access restrictions that prevent using a third party. Moreover, companies with large scale infrastructure tend to have cost advantages derived from volume economics, which a reference architecture will not change.

Easy interoperability between private and public cloud computing will inevitably become a customer requirement to avoid catastrophic lock-in. Eliminating concerns about proprietary lock-in will actually serve to expand the market. For an IT department, being able to flexibly shift between internal and external resources is incredibly attractive and makes it easier to test the waters. It also creates an opportunity for cloud vendors to supplement existing resources. For example, the IT department of a heavily seasonal business (e.g. online flower sales) probably wants sufficient internal infrastructure to handle the normal steady-state flow of business. But at especially active times (e.g. Mother’s Day and Valentine’s Day), it could tap into a cloud vendor for the extra capacity as the surge of orders stresses the internal infrastructure. Many businesses will balk at the notion of shifting their entire order process to the cloud to gain that flexibility, but would happily offload a portion to meet varying demands.
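To make the bursting idea concrete, here is a minimal sketch in Python of that placement decision. The capacity figure and the dispatch functions are hypothetical placeholders rather than any vendor’s actual API: work stays in-house while internal capacity remains, and only the overflow spills to rented public cloud capacity.

```python
# Hypothetical sketch of a cloud-bursting placement policy. The names
# ON_PREM_SLOTS, dispatch_on_prem and dispatch_to_cloud are placeholders
# for illustration, not any real vendor API.

from dataclasses import dataclass


@dataclass
class Order:
    order_id: int   # other order fields elided for brevity


ON_PREM_SLOTS = 100       # steady-state capacity provisioned internally
in_flight_on_prem = 0     # orders currently being processed in-house


def dispatch_on_prem(order: Order) -> None:
    """Placeholder: hand the order to the internal order-processing cluster."""
    print(f"on-prem: processing order {order.order_id}")


def dispatch_to_cloud(order: Order) -> None:
    """Placeholder: hand the order to capacity rented from a public cloud."""
    print(f"cloud: processing order {order.order_id}")


def place(order: Order) -> None:
    """Keep work in-house while slots remain; burst the overflow to the cloud."""
    global in_flight_on_prem
    if in_flight_on_prem < ON_PREM_SLOTS:
        in_flight_on_prem += 1
        dispatch_on_prem(order)
    else:
        # Peak demand (e.g. Mother's Day) overflows to rented capacity.
        dispatch_to_cloud(order)


if __name__ == "__main__":
    for i in range(150):  # 50% more orders than steady-state capacity
        place(Order(order_id=i))
```

In practice the decision would be driven by live utilization metrics and the relative cost of internal versus rented capacity, but the policy itself can remain this simple; the hard part, as discussed above, is the interoperability that lets the same workload run unchanged in either place.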

