For a while now, I’ve been bothered by the word commodity. Like legacy and greenfield, it carries an implicit value judgment. When we apply these words to technology adoption, they serve as marketing oars to rock the new-tech boat, but they are not much use when you actually need a fish for dinner.
And this article on the NYSE community cloud is a great example of why there are no commodity clouds.
The NYSE’s community cloud platform is designed to ensure that its customers are treated fairly, and it assures them that the maximum latency any user will experience in this data center is 70 microseconds (a microsecond is one millionth of a second) round-trip for any message, O’Sullivan said.
“We guarantee that nobody will have an advantage on the network,” said O’Sullivan. “It’s designed to be a level playing field for trading.”
Basically, this compute service comes with a latency service level and a promise that no one gets better latency, thus ensuring a level playing field for traders.
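To make the idea concrete, here is a minimal sketch of what checking such a flat-latency service level might look like. The function names, the per-tenant tolerance, and the sample numbers are all illustrative assumptions; only the 70-microsecond round-trip figure comes from the article.

```python
# Hypothetical sketch of a flat-latency SLA check. The 70 us round-trip
# bound comes from the NYSE article; everything else is illustrative.

SLA_ROUND_TRIP_US = 70  # maximum round-trip latency, in microseconds


def sla_violations(samples_us):
    """Return the round-trip samples (microseconds) that exceed the SLA."""
    return [s for s in samples_us if s > SLA_ROUND_TRIP_US]


def is_level_playing_field(per_tenant_max_us, tolerance_us=5):
    """A 'level playing field' here means no tenant's worst-case latency
    differs from any other tenant's by more than a small tolerance
    (the 5 us tolerance is an assumption, not from the article)."""
    worst = max(per_tenant_max_us.values())
    best = min(per_tenant_max_us.values())
    return worst - best <= tolerance_us


samples = [42, 55, 68, 71, 63]
print(sla_violations(samples))                               # → [71]
print(is_level_playing_field({"a": 64, "b": 66, "c": 67}))   # → True
```

The point of the second check is exactly the NYSE pitch: the guarantee is not just a latency ceiling, but that no tenant gets a meaningfully better ceiling than anyone else.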
So it’s “level-playing-field-as-a-service,” which is both right and ridiculous. Right because that is the differentiator; ridiculous because I have to bolt on the *aaS to describe what I would previously have simply called a “service.”
There was a time when coffee was called a commodity; then Howard Schultz of Starbucks came along, and Peet’s came along, and now we are all paying $5 for coffee.
As a resident of Austin, TX, I got to experience a record-setting heat spell and drought this summer, not to mention some of the worst forest fires, which have yet to be contained. I was fortunate to escape the heat in the last week of August and attend VMworld 2011.
One of the themes at the conference was desktop virtualization -- desktop access through a range of devices, with access to cloud-based virtual machines. This is a timely issue in light of the June Cisco Visual Networking Index report, which predicted that there will be twice as many networked devices as people on earth by 2015. With the proliferation of devices such as the iPhone and iPad, it is not inconceivable that workers will use the same devices at the office and at home.
Another theme at the conference was management of servers and desktops, including provisioning, ongoing maintenance, and automation to meet service level agreements. VMware’s CTO also mentioned in his keynote that some of their biggest investments are around operations management of the virtualized environment.
Not surprisingly, another theme at the conference was Cloud Computing. Whether virtualization is required for Cloud Computing can be a topic for heated debate. Although virtualization is not an integral part of the NIST definition of Cloud Computing, the resource-pooling characteristic of Cloud Services is enabled by virtualization.
These themes prompted me to revisit a Cisco-sponsored study by Forrester Research on the basics of management for Cloud Computing. Although it was aimed at Cloud management, the basic steps and concepts should be valid for any data center on a journey towards a dynamic, connected world. The paper is titled “Elements of Cloud Service Orchestration”. A closer look under the hood is warranted even though the term “Service Orchestration” has taken on a life of its own, with Wikipedia calling it a buzzword. A webcast on the topic is also available. What do you think service orchestration means in the context of data center management? I am very interested in your feedback.
Last week I presented and participated at The Open Group Forum in Austin, TX. It was a great event, with insights into Enterprise Architecture, Business Architecture, and emerging architectures. There were several breakout tracks in the Forum, including the most popular: the Cloud Architectures track. The sessions ranged from connecting architecture frameworks (TOGAF) to Cloud architectures, to Cloud architecture development. My session was on “Architecture & Considerations for IaaS Clouds”. This session was more focused on the technology aspects of Cloud architecture, and it could be applied to either an enterprise private cloud or a service provider cloud setting. Just to level-set everyone in the audience, I started out with a taxonomy and reference architecture (RA) review, using both NIST’s published RA and a simplified version of Cisco’s Cloud RA. The Cisco RA review was the case in point for this session, where the Infrastructure, Service Orchestration, Delivery/Management, and Consumer layers were discussed.
This week’s focus on Cisco’s Unified Network Services (UNS) portfolio looks at cloud orchestration and the concept of a Network Hypervisor. What is a “Network Hypervisor”?
In the same way that a traditional hypervisor can offer up a modular, replicable set of virtual server resources (including OS, CPU slice, and network interfaces), a network hypervisor is a modular abstraction of reusable network services used to assemble a flexible data center or cloud infrastructure. Sounds interesting so far, but what does the network hypervisor actually do?
The first function is to allow organizations to pre-define and replicate modular network containers that abstract a rigid underlying network infrastructure away from the needs of individual applications and services. An example network container might be defined to include components such as logical VM ports, a load balancer, and a firewall. This logical network environment can be assigned to, and isolated for, a particular tenant, providing the network services a particular application needs and defining where the application VMs can be placed. The figure below shows how some modular, pre-defined containers can be nested and plugged together to offer customized services for a particular tenant. A small number of defined containers can be replicated and plugged together in a large number of permutations to address a wide range of application requirements.
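The composition idea above can be sketched in a few lines of code. This is not a Cisco API -- the class names, container catalog, and service names are all hypothetical -- it just illustrates how a small set of pre-defined containers can be replicated and plugged together per tenant.

```python
# Hypothetical sketch of "network containers": a small catalog of
# pre-defined service modules, plugged together per tenant. Names are
# illustrative, not an actual Cisco API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class NetworkContainer:
    name: str
    services: tuple  # e.g. ("vm_ports", "firewall", "load_balancer")


@dataclass
class TenantNetwork:
    tenant: str
    containers: list = field(default_factory=list)

    def attach(self, container: NetworkContainer):
        # Replicating a pre-defined container isolates its services to
        # this tenant without touching the underlying infrastructure.
        self.containers.append(container)
        return self

    def services(self):
        """All network services this tenant's containers provide."""
        return {s for c in self.containers for s in c.services}


# A small catalog of defined containers ...
BASIC = NetworkContainer("basic", ("vm_ports",))
SECURE = NetworkContainer("secure", ("firewall",))
BALANCED = NetworkContainer("balanced", ("load_balancer",))

# ... plugged together in different permutations per tenant:
web = TenantNetwork("web-tier").attach(BASIC).attach(SECURE).attach(BALANCED)
print(sorted(web.services()))  # → ['firewall', 'load_balancer', 'vm_ports']
```

The design point is that the catalog stays small while the permutations stay large: three container definitions here already yield seven non-empty tenant configurations.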
These flexible, pre-defined containers can be device-agnostic, just like their server counterparts, and help provide security and quality of service through tenant isolation, as well as application resiliency. During the application and VM provisioning process, the defined network containers advertise their capabilities and are deployed along with the VM in the proper locations. Just like the VMs they are aligned with, the network containers are location-independent and handle all the changes required during VM mobility, ensuring that the application has the same network services in its new location. This goes well beyond Layer 2 and 3 networking services, extending to Layer 4-7 application services such as load balancing, WAN optimization, and security, as mentioned earlier.
Today Cisco announced a new strategic alliance with BMC and introduced the Integrated Cloud Delivery Platform (ICDP) solution to give customers an option to easily deploy end-to-end Cloud services on a large-scale multi-tenant Cloud computing infrastructure that spans networks, computing systems, storage, and applications. ICDP increases the scalability of Cloud computing environments for our Service Provider and other large-scale multi-tenant clouds by automating and simplifying the service orchestration and management of their service portfolios.
This alliance extends Cisco’s ecosystem of partners in the Cloud space. The move builds on the relationship between our two companies: Cisco and BMC have worked together on 140+ customer engagements, combining BMC’s BladeLogic with our Unified Computing System (UCS). ICDP integrates BMC’s Cloud Lifecycle Management (CLM) solution with Cisco’s Unified Service Delivery (USD) solution to simplify the delivery of high-scale, secure, multi-tenant Cloud services. Combining CLM with the Unified Service Delivery infrastructure supports end-to-end lifecycle management of Cloud computing initiatives, seamlessly integrating planning, provisioning, assurance, compliance, and governance while increasing the quality of ongoing Cloud service delivery.