
Simplifying Cloud Infrastructure Deployments with Cisco Common Cloud Architecture built on Cisco UCS and OpenStack

In three short years, OpenStack has become a cloud management platform that is “Too Big to Fail” (according to Citi Research). Whether or not that is true, OpenStack is definitely gaining traction and is making a profound impact, not only as a viable cloud management option but also on the software economics of cloud solutions.

Cloud computing is rapidly transforming businesses and organizations by providing access to flexible, agile, and cost-effective IT infrastructure. These elastic capabilities help accelerate the delivery of infrastructure, applications, and services with the right quality of service (QoS) to increase revenue. Cisco’s approach—innovative and unified data center infrastructure that provides the underlying foundation for OpenStack technology—enables the creation of massively scalable infrastructure that delivers on the promise of the cloud.

Cisco Common Cloud Architecture built on the Cisco Unified Computing System (UCS) with OpenStack provides the foundation for flexible, elastic cloud solutions, enabling speed and agility. As the saying goes, “every skyscraper is built on a strong foundation of pillars,” and the OpenStack platform makes core demands of its underlying infrastructure: simplification, rapid provisioning, a self-service consumption model, and elastic resource allocation. Cisco UCS uniquely provides a policy-based resource management model that simplifies operations by integrating compute, networking, and storage, with the ability to scale and automate deployment.
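To make the self-service and elastic-allocation requirements concrete, here is a minimal sketch of requesting an instance through the OpenStack Compute API, using the present-day openstacksdk Python client (in 2013 the equivalent would have been python-novaclient). The cloud entry, image, flavor, and network names are hypothetical.

    # Minimal sketch of OpenStack's self-service consumption model.
    # "my-ucs-cloud" must exist in clouds.yaml; the image, flavor, and
    # network names below are assumptions for illustration.
    import openstack

    conn = openstack.connect(cloud="my-ucs-cloud")

    # Request an instance; the scheduler places it on whichever UCS
    # compute node has capacity -- elastic allocation from the user's view.
    server = conn.compute.create_server(
        name="dev-test-01",
        image_id=conn.compute.find_image("ubuntu-server").id,
        flavor_id=conn.compute.find_flavor("m1.medium").id,
        networks=[{"uuid": conn.network.find_network("tenant-net").id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once provisioning completes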

This foundation addresses every stage of cloud deployment, be it private or public cloud offerings. Some of the primary workloads targeted for OpenStack-based deployments are:

  • Self-service development and test environments
  • Massively scalable software-as-a-service (SaaS) solutions
  • High-performance, scale-out storage
  • Web server, multimedia, big data, and cluster-aware applications
  • Applications with extensive computing power requirements and mixed I/O workloads

To accelerate these cloud infrastructure deployments, Cisco has developed starter configurations focused on compute-intensive, mixed (heterogeneous), and storage-intensive workloads. The server nodes are typically sized to cover the OpenStack controller, compute, Ceph storage, Swift proxy, and Swift storage roles.
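As a rough illustration of how such starter configurations divide roles across nodes, the sketch below lays out hypothetical pods in Python; the role counts are invented for illustration and are not Cisco-published sizing.

    # Hypothetical node-role layouts for the three starter profiles.
    STARTER_PODS = {
        "compute-intensive": {
            "controller": 1, "compute": 8, "ceph-storage": 3,
        },
        "mixed-workload": {
            "controller": 1, "compute": 6, "ceph-storage": 3,
            "swift-proxy": 1, "swift-storage": 3,
        },
        "storage-intensive": {
            "controller": 1, "compute": 4,
            "swift-proxy": 2, "swift-storage": 6,
        },
    }

    def total_nodes(pod: str) -> int:
        """Total server count for a given starter configuration."""
        return sum(STARTER_PODS[pod].values())

    print(total_nodes("mixed-workload"))  # 14 in this hypothetical layout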

 

Cisco UCS Solution Accelerator Paks for Cloud Infrastructure Deployments


Scaling beyond 160 servers can be achieved by interconnecting multiple UCS domains with Cisco Nexus 3000/5000/6000/7000 Series switches, growing to thousands of servers and hundreds of petabytes of storage, all managed from a single pane of glass with UCS Central, whether within one data center or distributed globally, as shown in the figure above.
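As a back-of-the-envelope sketch of that multi-domain scaling, using the 160-server single-domain figure from the post:

    # How many UCS domains (160 servers each) does a target cloud need?
    SERVERS_PER_DOMAIN = 160  # single UCS Manager domain limit cited above

    def domains_needed(target_servers: int) -> int:
        # Ceiling division: a partially filled domain still counts.
        return -(-target_servers // SERVERS_PER_DOMAIN)

    print(domains_needed(2000))  # 13 domains for a 2,000-server cloud,
                                 # interconnected via Nexus and UCS Central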



Cisco UCS Director is Nominated for a ‘Storage, Virtualisation and Cloud (SVC)’ Product of the Year Award!

October 24, 2013 at 4:55 am PST

I’m happy to report that Cisco UCS Director (formerly Cloupia) has been selected as a finalist for the 2013 Storage, Virtualisation & Cloud (SVC) Awards! Please take a moment to vote for UCS Director at http://cs.co/SVCAward.

This finalist nomination recognizes the innovation and differentiation that Cisco UCS Director provides for end-to-end converged infrastructure management, including automation for both virtual and physical resources across compute, network, and storage.

The video below provides a good overview of Cisco UCS Director and its benefits for IT organizations:

 

The sweet spot for Cisco UCS Director is managing converged infrastructure based on Cisco’s Unified Computing System (UCS) with Cisco Nexus switches and third-party storage, with a focus on our market-leading integrated systems: the FlexPod solution with NetApp, VCE’s Vblock Systems, and our VSPEX solutions with EMC storage.

But the beauty of Cisco UCS Director is that it can also manage heterogeneous environments, including non-Cisco infrastructure and multiple hypervisors. Whether you call it your single pane of glass or one ring to rule them all, it is a highly innovative and comprehensive infrastructure management solution for your data center operations. These capabilities and more are highlighted in the award nomination, which you can read here.
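For readers who want to script against it, here is a hedged sketch of calling the UCS Director REST API from Python. The /app/api/rest endpoint shape, the opName/opData parameters, and the X-Cloupia-Request-Key header follow the Cloupia-heritage API, but treat the exact names as assumptions and verify them against the API guide for your UCS Director release.

    # Hedged sketch: list UCS Director self-service catalogs via REST.
    import requests

    UCSD_HOST = "ucsd.example.com"      # hypothetical appliance address
    API_KEY = "REPLACE_WITH_USER_KEY"   # per-user key generated in the GUI

    def ucsd_call(op_name: str, op_data: str = "{}") -> dict:
        resp = requests.get(
            f"https://{UCSD_HOST}/app/api/rest",
            params={"formatType": "json", "opName": op_name, "opData": op_data},
            headers={"X-Cloupia-Request-Key": API_KEY},
            verify=False,  # appliance often ships a self-signed cert
        )
        resp.raise_for_status()
        return resp.json()

    # Fetch the catalog items a self-service user would see in the portal.
    print(ucsd_call("userAPIGetAllCatalogs"))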



#EngineersUnplugged S3|Ep13: Software Defined Storage Continued!

September 18, 2013 at 10:42 am PST

Welcome back to the final episode of Engineers Unplugged, Season 3! It’s been quite a ride. This week, we take another viewpoint on the hot topic of software defined storage with Mike Slisinger (@slisinger) and Vaughn Stewart (@vstewed). Starting from the application owner’s perspective, this is a great 101 on the choices made on the road to the data center of the future. Let’s listen in:

Better stick to storage, not unicorns! Art by Mike Slisinger and Vaughn Stewart.

Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

How far up the unicorn scale is your data center in regard to software defined storage? Post a comment below!

Thanks for your viewership and support of Engineers Unplugged. We’ll be on site at VMworld Barcelona, camera and whiteboard markers in hand. If you’ve got show ideas or questions, tweet me @CommsNinja.


The Programmable Network: IP and Optical Convergence

Someday soon, personal sensors, wearable gadgets, and embedded devices and services may make today’s PCs, laptops, tablets, and smartphones look quaint by comparison. But as the Internet of Everything (IoE), with its diverse array of devices accessing a plethora of existing and new services, continues to evolve rapidly, user-friendly interfaces mask growing complexity within networks. An article on today’s digital designers in the September 2013 issue of Wired captured how the focus is now “creating not products or interfaces but experiences, a million invisible transactions” and that “even as our devices have individually gotten simpler, the cumulative complexity of all of them is increasing.”

That inevitably takes us behind the curtain, to the exciting challenge of building hyper-efficient programmable networks using virtualization, the cloud, Software Defined Networking (SDN), and other technologies, architectures, and standards.

So far, this blog series on The Programmable Network has described various new and exciting capabilities leading to greater efficiencies and cost benefits. We’ve shared with you how you can now:

  • Visualize and control traffic using path computation via a network controller (see the sketch after this list)
  • Monitor and optimize traffic flows across network connections
  • Order services through an easy-to-use online portal that then launches automated service-creation tasks
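To give a feel for the first capability in that list, here is a toy sketch of controller-based path computation: the controller holds a weighted topology graph and answers with an explicit route. The topology, link costs, and the use of networkx are invented for illustration; a real controller would speak PCEP or similar to the network.

    # Toy path computation "element": shortest path over a weighted topology.
    import networkx as nx

    topology = nx.Graph()
    # (node, node, IGP-style link cost) -- costs invented for illustration
    topology.add_weighted_edges_from([
        ("A", "B", 10), ("B", "D", 10),   # IP path
        ("A", "C", 5),  ("C", "D", 20),   # optical detour
    ])

    # The controller computes an explicit route the network then programs.
    path = nx.shortest_path(topology, "A", "D", weight="weight")
    cost = nx.shortest_path_length(topology, "A", "D", weight="weight")
    print(path, cost)  # ['A', 'B', 'D'] 20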

These capabilities are all…


Why Virtualizing the Network Is Not the Same as Virtualizing the Server

VMware launched NSX, its Network Virtualization platform, at VMworld last week. In his keynote, VMware CEO Pat Gelsinger portrayed Network Virtualization as a natural extension of what VMware accomplished in Server Virtualization. However, the market fundamentals and early drivers for Server Virtualization are not quite the same as those for Network Virtualization, so any comparison between the two should be understood and weighed in its proper context.

The drive for Server Virtualization was fundamentally an attempt to close the growing gulf between the rapid pace of technology advancement in the server space and customers’ ability to utilize the resulting excess capacity. It was a trend driven by a focus on efficiency in an era when cost was becoming important. Over nearly a decade, Server Virtualization has accomplished this goal of better asset utilization: server utilization levels have increased by a factor of four.


Networks in today’s data centers, however, do not suffer from this excess-capacity problem. If anything, the problem is the reverse: user demand for network capacity continues to outpace what is currently available. As long as there remains a growing gulf between user expectations for capacity and technology advancement, there will remain opportunity for vendors to innovate in this space. In other words, unlike in the server world, network virtualization does not shift value away from the underlying infrastructure.

Server Virtualization is transforming IT by providing greater business agility. The goal of Network Virtualization should be to bring similar business agility to the network. However, this goal need not require complete decoupling of the virtual network from the underlying physical network, as some vendors may lead you to believe. Gaining agility by completely decoupling the virtual network from the physical one can be done with confidence only by significantly over-provisioning the physical network: if bandwidth is plentiful, the overlays have less need to understand or integrate with the underlying infrastructure. This shortsighted approach, which focuses on business agility but ignores business assurance, will increase network capital and operating expense over time. Note that even in the server world, where compute efficiency was attained, the benefit did not come with capex or opex savings: capex savings on server hardware were offset by the increased cost of virtualization software, and opex has continued to increase over the last decade.
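A toy model makes that assurance argument concrete: placing overlays with no visibility into the fabric is only “safe” while the fabric is over-provisioned. All numbers below are invented.

    # Toy model: does aggregate overlay demand fit the fabric with headroom?
    def fabric_is_safe(link_capacity_gbps: float,
                       overlay_demands_gbps: list,
                       headroom: float = 0.2) -> bool:
        """True when the fabric is effectively over-provisioned for these
        overlays, i.e. demand fits under capacity minus a headroom margin."""
        return sum(overlay_demands_gbps) <= link_capacity_gbps * (1 - headroom)

    demands = [3.0, 4.0, 2.5]  # tenant overlays placed with no fabric awareness
    print(fabric_is_safe(40.0, demands))  # True: ample headroom, decoupling "works"
    print(fabric_is_safe(10.0, demands))  # False: congestion without integration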

As IT increasingly takes on a service-centric view, more intelligence will be needed at the edge, whether physical or virtual. Cisco’s launch of Dynamic Fabric Automation (DFA) last July addresses this view with an optimized fabric infrastructure and a more intelligent network edge that can enable any network anywhere, supporting transparent mobility for physical servers and virtual machines. Application Centric Infrastructure (ACI) takes this a step further by enabling application-driven policy automation, management, and visibility for physical and virtual networks. Both also integrate the physical and the virtual network for agile service delivery that assures the full-lifecycle user experience.
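To illustrate the application-driven policy idea as a pure sketch (this is not the actual APIC object model; every name below is invented), an application owner might declare tiers and allowed relationships, which an automation layer then renders onto physical and virtual networks alike:

    # Invented application profile: tiers plus the contracts between them.
    app_profile = {
        "application": "web-shop",
        "tiers": {
            "web": {"endpoints": "vm", "scale": 4},
            "db":  {"endpoints": "bare-metal", "scale": 2},
        },
        "contracts": [
            # Only the web tier may reach the database, and only on 3306.
            {"from": "web", "to": "db", "port": 3306, "proto": "tcp"},
        ],
    }

    def allowed(src: str, dst: str, port: int, profile: dict) -> bool:
        """Would the rendered fabric policy permit this flow?"""
        return any(c["from"] == src and c["to"] == dst and c["port"] == port
                   for c in profile["contracts"])

    print(allowed("web", "db", 3306, app_profile))   # True
    print(allowed("db", "web", 3306, app_profile))   # False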

You may also want to read more on this topic:

Dynamic Fabric Automation: http://www.cisco.com/en/US/solutions/ns340/ns517/ns224/ns945/dynamic_fabric_automation.html

Shashi Kiran’s blog: The Next Paradigm Shift: Application-Centric Infrastructure (ACI) gets ready to rumble

Padmasree Warrior’s blog: Limitations of a Software-Only Approach to Data Center Networking
