
Introducing Cisco Intelligent Automation for Cloud – Version 3.1

Just the other morning, my 3.5 year old daughter said “Daddy, can you make me a waffle?” And like any self-respecting parent, I of course responded with “Poof. You’re a waffle.”

It reminded me of something we frequently hear from customers: they effectively ask us to “make my data center a cloud.”  Now we could wave our arms and say “Poof. It’s a cloud.” But it’s not that easy.  Despite what some cloudwashers may say, virtualizing your data center does not mean you have a cloud – and self-service provisioning of VMs is not cloud computing.  Real clouds require much more.

Fortunately, we have solutions to help our customers deploy real clouds – with market-leading compute, network, and management products in our Unified Data Center portfolio as well as our cloud enablement services.  In fact, today we introduced yet another innovation in our Unified Computing System (UCS) portfolio with Cisco UCS Central.

I’m pleased to also announce the latest release of our cloud management software solution today: Cisco Intelligent Automation for Cloud version 3.1.  This release introduces several exciting new features, and I’ve highlighted a few of these new product capabilities below.

Virtual Data Centers – In simple infrastructure-as-a-service use cases, virtual machines and other resources may be provisioned from a shared pool of resources on-demand.  In more advanced infrastructure-as-a-service use cases, virtual data centers (VDCs) can be established to provide project teams or departments with a dedicated resource pool of compute, storage, and network capacity for their own organization. I’ve written in the past about this concept of a virtual data center and this is what Cisco IT deployed for our own internal private cloud.
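
To make the VDC idea concrete, here is a minimal Python sketch of the bookkeeping a dedicated resource pool implies: an organization gets a capped allocation of compute, memory, and storage, and each provisioning request draws against that cap. The class, field names, and numbers are illustrative assumptions, not the Intelligent Automation for Cloud data model; in a real deployment these quotas are enforced by the cloud management layer, not application code.

```python
# Hypothetical sketch of a virtual data center (VDC) as a capped resource pool.
from dataclasses import dataclass


@dataclass
class VirtualDataCenter:
    name: str
    vcpu_limit: int        # total vCPUs reserved for this organization
    memory_gb_limit: int   # total RAM in GB
    storage_gb_limit: int  # total block storage in GB
    vcpu_used: int = 0
    memory_gb_used: int = 0
    storage_gb_used: int = 0

    def can_provision(self, vcpus: int, memory_gb: int, storage_gb: int) -> bool:
        """Check whether a requested VM still fits inside this VDC's quota."""
        return (self.vcpu_used + vcpus <= self.vcpu_limit
                and self.memory_gb_used + memory_gb <= self.memory_gb_limit
                and self.storage_gb_used + storage_gb <= self.storage_gb_limit)

    def provision(self, vcpus: int, memory_gb: int, storage_gb: int) -> None:
        """Record a successful provisioning request against the VDC quota."""
        if not self.can_provision(vcpus, memory_gb, storage_gb):
            raise ValueError("request exceeds the VDC's reserved capacity")
        self.vcpu_used += vcpus
        self.memory_gb_used += memory_gb
        self.storage_gb_used += storage_gb


# Example: a project team's VDC with room for a handful of medium VMs.
engineering = VirtualDataCenter("engineering", vcpu_limit=64,
                                memory_gb_limit=256, storage_gb_limit=2000)
engineering.provision(vcpus=4, memory_gb=16, storage_gb=100)
print(engineering.can_provision(vcpus=64, memory_gb=8, storage_gb=50))  # False
```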



IT “Patients”

When you think of cloud technology and data center virtualization, you likely think of big corporations managing their data centers and IT infrastructure to drive business forward. But that’s not the only sector benefiting from the cloud. The healthcare industry is showing that virtualization can make a real difference to people’s well-being, with technology being used to help save lives, not just increase revenue.

Updating an IT infrastructure with cloud enablement is changing the medical world in important ways. Through the cloud, clinicians can access medical records and information from a multitude of devices, from anywhere. Never being out of range in an emergency is a huge step for healthcare. It means less physical hardware, easier access, shared information, and better service for the patient.

Consider St. George’s Healthcare NHS Trust, a leading healthcare provider. After experiencing difficulties in accessing information and maxing out resources, St. George’s made the move to Unified Communications. Doctors and nurses are now able to retrieve information from the device of their choice, enabling quicker response to patients’ needs, all while meeting new government regulations and controlling their budget.

Other examples include Sparrow Health, which strove to be a national leader in quality and patient experience. With virtualization and cloud-based applications, Sparrow achieved a medical-grade network that solved the problems of its former, unreliable IT system. Seattle Children’s Hospital, meanwhile, dramatically reduced the time spent accessing information and managing its systems by bringing nearly 400 servers and 5,500 workstations under central management using virtualization. Likewise, Cook County Health and Concentra are among the healthcare providers that have reaped the benefits of a virtualized, unified network.

For these profiles and more information on using the cloud to increase ROI and lower TCO, visit UnleashingIT.com.


OpenStack, Cisco ONE and You

October 16, 2012 at 8:40 am PST

So, with our announcements around OpenStack this week, a few folks have asked me how OpenStack fits into our broader strategies like Cisco Open Network Environment. The short answer is “quite well, actually”; the longer answer follows. :)

If you look back at our original introduction of the Cisco Open Network Environment, we made a couple of points: there is a plurality of use cases, and as a result there needs to be a plurality of enabling technologies. While there are common objectives such as agility and programmability to better handle macro trends like cloud and virtualization, the truth is that everyone has their own design objectives and priorities. To that, I might add that folks also have varying operational objectives and priorities: the amount of risk and complexity they are willing to take on.

With the three-pillar structure of the Open Network Environment, we feel like we have given folks the flexibility to choose the right technologies for the job. With initiatives like OpenStack we now support a different kind of flexibility.

While a segment of the market seems to want to start writing their own protocols and hand-wiring flow tables, a different segment is moving in the other direction, expressing a desire to get out of the infrastructure business and focus their time and effort on their apps and their users; this has traditionally been the Vblock and FlexPod crowd. With OpenStack, they now have another option: they get the programmability we talk about with the Open Network Environment, but at the stack level instead of at the box level. The idea behind something like the Cisco Edition of OpenStack is to simplify the task and reduce the risks of standing up a cloud stack. You have the full Folsom release of OpenStack, some Puppet recipes to simplify deployment, and validation against the relevant Cisco hardware (follow that last link for details).
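
For a flavor of what programmability at the stack level looks like, here is a hedged sketch that boots a VM through the standard OpenStack Compute API using python-novaclient. The endpoint, credentials, flavor name, and image ID are placeholders, and nothing here is specific to the Cisco Edition; it simply illustrates asking the stack for capacity rather than configuring individual boxes.

```python
# Hedged sketch: request an instance from an OpenStack cloud via python-novaclient.
# All credentials, endpoints, and IDs below are placeholders, not real defaults.
from novaclient import client

nova = client.Client("2",                 # Compute API version
                     "demo-user",         # username (placeholder)
                     "demo-password",     # password (placeholder)
                     "demo-project",      # tenant/project (placeholder)
                     auth_url="http://controller:5000/v2.0")

# Look up a flavor by name, then ask Nova to boot an instance from a
# pre-registered image. The image UUID is a placeholder.
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="demo-instance",
                             image="11111111-2222-3333-4444-555555555555",
                             flavor=flavor)

print("Requested instance:", server.id, server.status)
```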

To get more insights into our OpenStack announcements this week, check out this blog by Lew Tucker, our VP/CTO for Cloud Computing, and this post by Kyle Mestery, one of the many Cisco folks who have invested a great deal of time and effort in OpenStack.

One final thought: we are a long way from being done. In just the last few days, I blogged about how our Virtuata and vCider acquisitions fit into a multi-cloud strategy, we have had the aforementioned posts related to this week’s OpenStack announcements, and Rodrigo Flores just posted about our Multi-Cloud Acceleration Kits for our Intelligent Automation for Cloud solution. While cloud is the destination, there are as many ways to get there as we have customers, and we will continue to innovate and partner on a number of fronts, in ways that will likely surprise some folks. Stay tuned.


Virtuata and vCider: Next Steps to Building a World of Many Clouds

October 12, 2012 at 11:53 am PST

One thing that has always been clear to us is that a pragmatic cloud and virtualization solution needs to embrace diversity. There are many paths to cloud, and customers want the freedom to host workloads on physical infrastructure, on any of the available hypervisors, or on one of a growing number of cloud options. This realization has been one of the factors shaping our strategy for delivering practical virtualization and cloud solutions to the market.

Cloud Networking: Multi-Hypervisor and Multi-Service

Initially, we focused on physical/virtual consistency and separation of duties. We kicked this effort off with the Nexus 1000V, a fully functioning NX-OS switch rendered entirely in software. With L2 handled, we moved on to deploy virtual services consistent with their physical counterparts, such as the ASA 1000V, the Virtual Security Gateway (VSG), and vWAAS. Finally, we fleshed out the networking stack with the Cloud Services Router (CSR 1000V).

The network has always been a platform for connecting heterogeneous operating systems and heterogeneous applications. Naturally, the next step was to take the capabilities we had built and extend them across multiple hypervisors, so we could deliver a consistent experience for customers with heterogeneous hypervisor environments. We built on our success with over 6,000 enterprise and service provider VMware vSphere customers and are now extending those same capabilities to Microsoft Hyper-V environments as well as to the Xen and KVM open source hypervisors. With the recently announced shift to a “freemium” pricing model with the Nexus 1000V Essential Edition, customers gain these benefits with minimal cost and risk.

vCider and Virtuata: Opportunity for Secure Multi-cloud Networking

However, some of the most interesting progress has come from two of our more recent acquisitions, both centered on providing better operations and management of multi-cloud environments. As customers adopt cloud and virtualization more broadly, security and isolation at the VM level become paramount. To address this need, we acquired Virtuata this summer. The Virtuata technology will give us (okay, you) the ability to apply sophisticated and consistent security to VMs across multi-hypervisor and multi-cloud environments.



Upcoming Cloud Computing Open Source Conferences

In case you missed it, Cloud Computing is hot right now. Has it peaked? That depends on whose articles you read. Along those lines, Gartner is arguing that cloud washing is coming to an end and customers are now making more informed decisions. Regardless of whether the hype cycle is over or just beginning, one thing that remains constant is the use of Open Source Software in Cloud Computing. Look no further than projects such as OpenStack, CloudStack, and oVirt to see the past, present, and future of Open Source Cloud Computing platforms. If you’re serious about deploying these technologies as part of your infrastructure, take note of the following upcoming events, where you can explore each technology alongside the people who helped create it.

  • The OpenStack Summit is coming up the week of October 15 in San Diego, CA. This event will showcase both vendors and users of OpenStack technology. But the real treat for developers and DevOps folks is the design portion of the Summit, which gives OpenStack developers the chance to plan features for the upcoming “Grizzly” release, slated for spring 2013.
  • CloudStack will hold its CloudStack Collaboration Conference November 30th to December 2nd in Las Vegas, NV. This event is a chance to get familiar with CloudStack and attend sessions detailing the technology underlying CloudStack, as well as user-focused sessions detailing deployments of the Apache CloudStack project.
  • The upcoming KVM Forum will be co-located with the oVirt Workshop, taking place in Barcelona, Spain, November 7-9. This event is a great chance to learn more about oVirt, particularly the future direction of the project, and to attend sessions on deploying and using it.

Each of the events listed above is a great way to get a better understanding of your Cloud Computing software of choice, and to engage with developers, users, and vendors around the software. What Open Source Cloud Computing events are you looking forward to attending?
