According to GigaOM, cloud-based resources are what's "next" for IT, setting the stage for an in-depth look at the infrastructure that will drive the next decade of application development.
At the recent Structure event, GigaOM tapped into the minds of cloud-technology industry leaders, seeking insight into the “Top 5 Questions for the Titans of Cloud.”
In this post, Gee Rittenhouse, Vice President/General Manager, Cloud and Virtualization Group at Cisco, provides answers and insight on cloud infrastructure, exchange, data security and more.
Top Cloud Question #1: “When will all the major clouds support the same set of APIs?”
Today, there is a three-horse race among two proprietary APIs (Amazon Web Services and VMware's vCloud API) and one open API (OpenStack). For now, the two proprietary APIs will remain the dominant players, leveraging their large public cloud (in the case of AWS) and private cloud (in the case of VMware) deployments.
But, as an increasing number of service providers and enterprises adopt and deploy OpenStack cloud solutions across both public and private models, the balance will shift, more than likely over the next two to four years.
Cisco's approach is different from other, more infrastructure-centric public cloud offerings. We believe that OpenStack's open API model will eventually be the dominant cloud API model and will ultimately become the de facto standard.
Looking to the future, beyond just a hybrid cloud conversation, the Intercloud, an interconnected global cloud of clouds built with a commitment to open standards and based on OpenStack, will feature APIs to connect any cloud or hypervisor to any other cloud or hypervisor.
A new and innovative architecture? Perhaps, but that is only part of the story.
A unique, compelling management paradigm that sped and simplified tasks, while promoting collaboration? Potentially, and definitely part of the formula as well.
The real story is people. People buy technology to do work that needs to be done. People have to think ahead; they must understand what will be needed and then decide on a path, and on a partner (still more people), to develop and deliver the technology they need. [I had a bunch more "people" in here but it was getting really ridiculous, instead of only slightly ridiculous.]
Real people, not real stories, make real decisions every day and choose the technology that meets their needs, now and in the future. They decide what works and what does not.
So why UCS? There have been a lot of comments about UCS over the years that have resonated with me on this very question. I wanted to share the two that seem most on point right now. It is a bit of "then and now," since they are two years apart, but the sentiments are remarkably similar.
“…Unlike other server vendors, Cisco’s UCS launch was from a fresh-fields approach that recognized the industry’s shift towards server virtualization and consolidation. Not tied down by legacy architectures…” – Cisco UCS – Undisputed Computing Success, March 2012, ZD Net, Archie Hendryx
“Five years ago…Cisco Systems launched…UCS…into the gaping maw of the Great Recession…Recessions have always accelerated transitions in IT architecture…in the favor of upstarts with new ideas and against incumbents who are set in their ways…” – Five Years On, UCS Makes Cisco A Systems Player, April 2014, EnterpriseTech, Timothy Prickett Morgan
“…upstarts with new ideas…” -- sounds like a pretty fair summary.
So where do UCS Customers see real benefit? I’d rather they tell you their real story:
Look at the operating costs for your data centers and you'll likely see a large line item for the electrical power that runs the servers, storage, networking components, and cooling systems. Since power consumption is an area where even small changes can add up to big savings over time, we want to take advantage of every power-saving feature we can find. And we've found many of those features in the Cisco Unified Computing System (UCS) servers, which we now deploy as the standard in our data centers worldwide.
Uptime Institute recently celebrated the winners of their third annual Server Roundup contest. The contest was launched to spotlight the amount of resources that can be recovered and the amount of waste reduced by decommissioning outdated and underutilized servers. While the results are impressive, the process for identifying these servers was difficult and labor intensive.
Barclays decommissioned 9,124 servers, resulting in savings of 2.5 MW of power ($4.5M in power costs), roughly 5,000 tons of carbon emissions avoided, $1.3M in legacy hardware maintenance costs, and the reclamation of 588 server racks.
Sun Life Financial decommissioned 441 servers, resulting in savings of 115 kW of power ($100,000 in power costs), roughly 330 tons of carbon emissions avoided, and the reclamation of valuable space in the data center.
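As a rough sanity check on figures like these, the annual cost of a continuous electrical load is simply kilowatts × hours per year × rate. A minimal sketch, where the $0.10/kWh rate, the PUE factor, and the `annual_power_cost` helper are illustrative assumptions rather than figures from the contest results:

```python
# Rough savings estimate for a decommissioned, continuously running load.
# The electricity rate and PUE defaults below are assumed for illustration.

HOURS_PER_YEAR = 8760

def annual_power_cost(kw_saved, usd_per_kwh=0.10, pue=1.0):
    """Annual electricity cost of a continuous load of kw_saved kilowatts.

    usd_per_kwh is the assumed utility rate; pue (power usage
    effectiveness) can fold in cooling overhead if desired.
    """
    return kw_saved * HOURS_PER_YEAR * usd_per_kwh * pue

# A ~115 kW reduction at an assumed $0.10/kWh lands close to the
# $100,000/year power savings cited for Sun Life:
print(round(annual_power_cost(115)))  # prints 100740
```

The same helper applied to a megawatt-scale reduction shows why large decommissioning programs translate into millions of dollars per year.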
All of the 2013 winners and finalists shared that they had decommissioned between 10% and 40% of their initial server counts, and expressed the same sentiments: the cheapest data center is the one you never build; decommissioning obsolete servers is "free money"; make the best use of your space by getting rid of equipment that isn't being used.
When the 2013 winners were asked what software they used to identify the servers they decommissioned, responses varied: from tracking via Excel spreadsheets and CMDB databases, to polling servers' back-end data with DCIM tools, to hiring college students to conduct a three-to-four-month manual book-to-floor audit, followed by several more months of manually mapping applications to the servers using them.
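At its core, a book-to-floor audit is a reconciliation of two inventories: what the records say exists versus what is actually observed on the network. The hard part the winners describe is collecting and mapping that data by hand; the comparison itself is trivial, as this minimal sketch with invented server names shows:

```python
# Hedged sketch of the comparison step in a "book to floor" audit.
# All server names here are invented sample data.

cmdb_records = {"srv-001", "srv-002", "srv-003", "srv-004"}  # the "book"
discovered   = {"srv-002", "srv-003", "srv-005"}             # the "floor"

# Recorded but never observed: candidates for decommissioning or bad records.
ghosts = cmdb_records - discovered

# Observed but never recorded: untracked hardware drawing power.
untracked = discovered - cmdb_records

print(sorted(ghosts))     # prints ['srv-001', 'srv-004']
print(sorted(untracked))  # prints ['srv-005']
```

Automated discovery tools shrink the months of manual effort down to producing the two sets; the set difference does the rest.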
I applaud all of the hard work and great results these companies have achieved, but imagine how much more efficient they could be if they were leveraging Cisco EnergyWise Suite’s ability to deploy in a matter of hours and:
Automatically discover every device that is attached to the network, in real time
Gain visibility into the energy consumption and utilization of 100% of the devices in the data center
Identify energy-inefficient devices
Monitor, measure, and manage the energy used by their network-connected devices, regardless of device type or manufacturer
Optimize virtualized and cloud computing environments
Create policies that automatically and remotely manage power for network-connected devices to cut energy costs