For many months now, we’ve talked about the Journey to Cloud Computing and how an evolution within your Data Center is needed to make that a reality. In many cases, we looked at this from an application perspective, focused on the interaction between automation, applications, servers, storage and the edges of the network.
But many of you have asked us to provide a broader understanding of the role the Network plays in the Journey to Cloud Computing. Specifically, you’ve asked us to highlight several areas:
What is Cisco’s perspective and strategy on the use of multiple types of Cloud Computing (Private, Public, Hybrid, Community), and what is needed from the network to interconnect all of these offerings?
How does my business manage the network transitions needed between today’s applications (often client-server), the virtualization of those applications, and next-generation web and big data applications?
What do we need to consider within my Data Center as we try to maximize efficiency and scalability?
What do we need to consider at the edges of our networks, where the proliferation of devices is almost out of control?
Are there ways to protect my network investments while still having the flexibility to deal with the business uncertainties that are around the next corner?
After just getting back from a great week at Cisco Live 2011, I wanted to highlight one of the demonstrations that garnered a huge amount of attention from attendees (customers and partners). This is from our CITEIS project, Cisco’s internal Private Cloud.
This demonstration highlights a number of unique Cisco Data Center technologies, along with partner technologies:
Yesterday the Cisco Live! Las Vegas show concluded, and it’s been quite a week. With William Shatner delivering the hilarious yet inspiring closing keynote in the afternoon, I’m looking forward to absorbing all we heard from customers, analysts, and the press once I’m back in San Francisco. As with space exploration, we have not yet seen or predicted all that will change with Cloud.
Staying with this week’s theme, I also wanted to thank you for answering my request for more public references and for emailing me new Cisco Service Provider references built with Unified Service Delivery on Vblocks and FlexPods. Here are a few:
Over the last few months, the big trend in Cloud Computing has been a dramatic shift from “talking” to “building”. Companies in every industry are taking the next steps to deploy their strategies for more efficient IT services, with the goal of delivering those services in the best possible manner regardless of the source (Private Cloud, Public Cloud service, Hybrid capabilities).
But companies looking to deploy Cloud Computing or expand their existing footprint face several challenges:
How to deal with ongoing support for legacy applications while beginning to deploy new virtualized or cloud-based applications?
How to ensure consistent levels of Security, Auditing, Compliance, and Quality of Service across the range of applications (old and new)?
How to build out Cloud Computing infrastructure in a way that is consistent and scales easily as demand grows?
Wow! Lots of outrage over the colossal cloud computing outage at Amazon! With big sites such as Reddit, Foursquare, and Heroku taken down by the issues with Amazon Web Services (AWS), there’s brouhaha brewing about a black eye on Amazon—and the entire cloud computing industry.
“The biggest impact from the outage may be to the cloud itself,” said Rob Enderle, an analyst with the Enderle Group, in ComputerWorld. “What will take a hit is the image of this technology as being one you can depend on, and that image was critically damaged today…If the outage continues for long, it could set back growth of this service by years and permanently kill efforts by many to use this service in the future.”
So the cloud might be a little beat up, but is cloud computing dead? Not even close.
Cloud computing is here to stay, not only because the model is more efficient and more cost effective than traditional IT infrastructure, but because it delivers on the promise of specialization: a value that gives companies an edge and consumers a better product.
What’s AT&T Got to Do With It?
Remember the days when AT&T was the only phone company around, and their phone was the only one you could buy? First it was rotary, and then it was push-button. AT&T made every single part of the phone. It made the screws that held the phone together. The whole machine was incredibly durable, but it was also heavy, clunky, and incredibly inefficient—not to mention expensive.
It didn’t stay that way, however. Boom! Deregulation hit the industry and the price of a phone went from a hundred dollars to a hundred pennies. Everything changed, and today we see the result: throwaway phones. Now phones are ubiquitous, they’re incredibly inexpensive, and they can do more than ever before.
IT infrastructure is moving down the same path. Until now, every company has built its own expertise into its proprietary IT systems. Every company has been (metaphorically speaking) fabricating its own screws, making its own hammers, and toiling over its own infrastructure. There’s been massive duplication of efforts, and the approach is filled with gross inefficiencies.
Now that’s all changing with cloud computing. It has gained rapid adoption exactly because it recognizes the inefficiencies and complications of traditional IT infrastructure, which is built on large, complex systems that require specialized skill sets to implement and deploy. The most interesting form of cloud computing is Infrastructure as a Service, or IaaS. Instead of tilting up the servers and fabricating the screws yourself, you look to a specialist—a large service provider with a deeper level of expertise, greater economies of scale, and the ability to provide the infrastructure on which you can run your apps. Another upshot: by removing a massive noncore task from the organizational to-do list, a new wave of efficiencies and innovation can be unleashed. (Pretty soon, traditional security will look no different from that rotary phone I saw on eBay for $9.99: a charmingly clunky reminder of a long-gone era.)
Build a Plan, Don’t Pray for Perfection
Cloud computing—or anything in computing—is not perfect. Data centers, whether they are public or private, go down. Outages happen in-house as well as to the industry’s leading cloud-hosting providers.
What we must all recognize is that we need solutions to better insulate companies against inevitable outages. The question we should be asking is not how can we trust the cloud, but rather how can we make enterprise applications more robust? What should the failover plan look like? (Because things fail.)
The answer is portability. We must have the ability to move apps from one infrastructure to another so that if one bursts, the whole world doesn’t come to a screeching halt. That’s Internet 101: instead of just one web server, have two web servers in different locations and roll the load between them. Contingency plans that included having two data centers from two different providers, in different availability zones, kept sites such as the business-audience marketing platform Bizo running during the Amazon outage. By similarly designing its systems to take potential failures into account, Netflix was largely unaffected.
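To make that "two web servers in different locations" idea concrete, here is a minimal client-side failover sketch in Python. The endpoint URLs are hypothetical stand-ins for the same application deployed with two different providers or availability zones; a production setup would more likely roll the load with DNS failover or a load balancer, but the logic is the same.

```python
import urllib.request

# Hypothetical endpoints: the same application deployed with two different
# providers / availability zones, per the Bizo-style contingency plan.
ENDPOINTS = [
    "https://app.provider-a.example.com/",
    "https://app.provider-b.example.net/",
]

def fetch_with_failover(endpoints=ENDPOINTS, timeout=3):
    """Try each deployment in order; return the body of the first one that answers."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except OSError as err:  # covers URLError, HTTPError, and timeouts
            last_error = err    # this deployment is down; roll to the next one
    raise RuntimeError("all deployments unreachable: %s" % last_error)

if __name__ == "__main__":
    print(fetch_with_failover()[:80])
```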
The current tools available for virtual data centers don’t provide good portability or failover between private and public data centers. Technology vendors need to address how to move a data center workload from one cloud computing provider to another, so they can provide the resiliency and efficiency needed to deal with the occasional bad hair day. With that investment, we’ll all come out looking a lot better.
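One way the industry has started chipping away at that gap is with provider-neutral APIs. As a rough illustration (not a Cisco product, and with placeholder credentials), the open-source Apache Libcloud library drives more than one cloud through a single interface:

```python
# A sketch of provider-neutral compute management with Apache Libcloud.
# ACCESS_KEY / SECRET_KEY are placeholders, not working credentials.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def connect(provider, key, secret):
    """Return a compute driver; list_nodes()/create_node() look the same everywhere."""
    return get_driver(provider)(key, secret)

# The same code path inventories workloads on two different clouds.
for provider in (Provider.EC2, Provider.RACKSPACE):
    driver = connect(provider, "ACCESS_KEY", "SECRET_KEY")
    print(provider, [node.name for node in driver.list_nodes()])
```

An abstraction layer like this doesn’t move a running workload by itself, but it is the kind of building block that cross-provider failover tooling will need.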