
Flipping the 70-30 IT OPEX Model On Its Head

In today’s always-on, connected, Internet of Things environment, IT is taking center stage in optimizing business operations.

Many IT organizations allocate 70% or more of their budgets to simply keeping operations up and running. It’s been like this for a long time, and there are good arguments for the approach: when IT stops working, business grinds to a halt. The penalty for failing to keep things moving is often swift and unpleasant.

At the same time, most CIOs will admit that they are increasingly being pulled in several directions at once. In addition to keeping the uptime ball in the air, as a CIO you must juggle an accelerating onslaught of new demands: the push toward video, mobile, data analytics, and cloud, plus the exponential increase in Internet traffic that means your networks never seem fast enough. And no IT executive wants to see their organization on the evening news in connection with the latest data security breach.

As a CIO, you should be asking your IT vendors how they can help reverse the 70-30 ratio in your shop without downgrading its performance. How do you transition to spending less on day-to-day operations? And what’s the best way to direct a bigger share of IT resources toward addressing the expanding needs of the internal lines of business with more innovative solutions?


#EngineersUnplugged S7|E3 The IOPS Don’t Lie!

In this episode, Matt Brender (@mjbrender) and Rick Vanover (@RickVanover) give us an overview of why you should let your IOPS tell the truth! Ever wonder whether your expectations for the modern Data Center match reality? Matt and Rick go through the math and the thinking behind what your storage systems can and cannot do!
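For a taste of the kind of math Matt and Rick walk through, here is a back-of-the-envelope sketch in Python. The drive characteristics and RAID layout are illustrative numbers of ours, not figures from the episode:

    def disk_iops(avg_seek_ms: float, rpm: int) -> float:
        """Rough ceiling for one spinning disk: a random I/O costs an
        average seek plus, on average, half a rotation."""
        half_rotation_ms = (60_000 / rpm) / 2
        return 1000 / (avg_seek_ms + half_rotation_ms)

    def effective_raid_iops(n_disks: int, per_disk_iops: float,
                            read_fraction: float, write_penalty: int) -> float:
        """Front-end IOPS after the RAID write penalty (e.g. 2 for RAID 10,
        4 for RAID 5) is applied to the write share of the workload."""
        raw = n_disks * per_disk_iops
        return raw / (read_fraction + (1 - read_fraction) * write_penalty)

    # A 15K-rpm drive with a 3.5 ms average seek tops out near 180 IOPS,
    # so eight of them in RAID 5 under a 70/30 read/write mix deliver far
    # less than the raw "8 x 180" number suggests.
    per_disk = disk_iops(avg_seek_ms=3.5, rpm=15_000)
    print(round(per_disk), round(effective_raid_iops(8, per_disk, 0.7, 4)))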

Special Note: For fun we’ve got a poll on your favorite unicorn… after the jump!



Cisco Keynote – End-to-End Optimization for Today’s Modern Datacenters

Have a bit of free time this Wednesday morning? If so, feel free to sit in on a Cisco keynote delivered by Mark Balch, Director of Cisco UCS Product Management, as he outlines the challenges faced and discoveries made with the UCS family, and how it has driven revolutionary change and business benefits for today’s modern datacenter.

The Cisco keynote opens WindowsITPro’s “virtual trade show” on Optimizing Your Virtual Infrastructure. The event brings top Microsoft industry experts together in an online forum, giving attendees the opportunity to learn about key datacenter optimization topics and trends.

Our UCS family has been a leader in Data Center optimization since its initial release to market five years ago. Designed for virtualization from the beginning, UCS is an integrated system configured through unified, model-based management to simplify the deployment of enterprise-class applications and services running in bare-metal, virtualized, and cloud-computing environments.


Download the UCS Family poster



Hard Choices!

Sorry ... I did not mean to steal the title of Hillary Clinton’s book. It just so happened that we had to make some “hard choices” of our own when deciding on the management approach for our new M-Series platform. In the first blog of the UCS M-Series Modular Servers journey series, Arnab briefly alluded to the value our customers place on UCS Manager.

As we had more customer conversations, we recognized a clear demarcation when it came to infrastructure management. One group of customers would not take any offering from us that was not managed by UCS Manager. On the other hand, a few customers who had built their own management frameworks were more enamored of the disaggregated server offering we intended to build; they held a strong perception that UCS Manager did not add much value to their operations. We were faced with a very difficult choice: release the platform with UCS Manager, or provide standalone management. After multiple rounds of discussion, we made a conscious decision to launch M-Series as a UCS Manager managed platform only.

Ironically enough, it was one such customer discussion that vindicated our decision. This customer deployed large cloud-scale applications and did not care much for UCS Manager. During the conversation, they mentioned some BIOS issues in their very large web farm that had surfaced a couple of years earlier. After almost two years, they were still rolling out the BIOS updates!

UCS Manager is the industry’s first tool to elegantly break down the operational silos in the datacenter by introducing policy-based management of disparate infrastructure elements. This was made possible by the concept of Service Profiles, which eased the rapid adoption of converged infrastructure. A Service Profile abstracts all of the elements associated with a server’s identity, rendering the underlying server essentially stateless. That enables rapid server re-purposing and workload mobility, and makes it easy to enforce operational policies such as firmware updates. The whole offering is built on a foundation of XML APIs, which makes it extremely easy to integrate with other datacenter management, automation, and orchestration tools. You can learn more about UCS Manager by clicking here.
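As a rough illustration of that XML API foundation, here is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk), which wraps the XML API. The hostname and credentials are placeholders:

    from ucsmsdk.ucshandle import UcsHandle

    # Placeholder endpoint and credentials
    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    # Service profiles live in the object model as "lsServer" objects;
    # anything the XML API exposes can be queried the same way.
    for sp in handle.query_classid("lsServer"):
        print(sp.dn, sp.assoc_state)

    handle.logout()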

UCS M-Series Modular Servers are the latest addition to the infrastructure that can be managed by UCS Manager. M-Series is targeted at cloud-scale applications deployed across thousands, if not tens of thousands, of nodes, where automated policy enforcement matters even more than in traditional datacenter deployments. Managing groups of compute elements as a single entity, fault aggregation, BIOS updates, and firmware upgrades were a few key UCS Manager features that surfaced repeatedly in customer conversations, and that was one of the primary drivers in our decision to release this platform with UCS Manager.
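To make the fault-aggregation point concrete, here is a sketch along the same lines as the previous example (again with placeholder credentials): one query returns every fault in the domain, no matter which chassis or cartridge raised it.

    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
    handle.login()

    # One query, domain-wide: faults from every chassis and cartridge
    # surface through the same aggregated object model.
    for fault in handle.query_classid("faultInst"):
        if fault.severity in ("critical", "major"):
            print(fault.dn, fault.severity, fault.descr)

    handle.logout()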

In the cloud-scale space, the ability to deploy a large number of servers almost instantaneously is a critical requirement. All of the nodes are deployed as essentially identical compute elements, so standardizing configuration across servers is a must. UCS Manager makes it extremely easy to create service profile templates ahead of time (making use of the UCS Manager emulator) and to create any number of service profile clones, literally at the push of a button. Associating the service profiles with the underlying infrastructure takes only a couple of clicks. Net-net: you rack, stack, and cable once, then re-provision and re-deploy to meet your workload needs without making any physical changes to your infrastructure.
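For a sense of what “push of a button” looks like through the API, here is a sketch assuming the ucsmsdk method factory’s wrapper for the lsInstantiateNNamedTemplate call; the template DN, organization, and clone names are placeholders of ours:

    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.ucsbasetype import Dn, DnSet
    from ucsmsdk.ucsmethodfactory import ls_instantiate_n_named_template

    handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
    handle.login()

    # Names for the clones to stamp out of the template
    dn_set = DnSet()
    for i in range(1, 4):
        dn = Dn()
        dn.attr_set("value", "m-node-%02d" % i)
        dn_set.child_add(dn)

    # "org-root/ls-m-series-template" is a placeholder template DN
    elem = ls_instantiate_n_named_template(
        cookie=handle.cookie,
        dn="org-root/ls-m-series-template",
        in_target_org="org-root",
        in_name_set=dn_set,
        in_error_on_existing="false",
        in_hierarchical="false")
    profiles = handle.process_xml_elem(elem)
    print([p.dn for p in profiles])

    handle.logout()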

Storage Profiles are the most notable enhancement to UCS Manager in support of M-Series. This feature allows our customers to slice and dice the SSDs in the M-Series chassis into smaller virtual disks. Each virtual disk is then served up to the server nodes in the compute cartridges plugged into the chassis, as if it were a local PCIe device. Steve explained that concept in detail in the previous blog.
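Conceptually, the carving works something like this toy Python model (purely our illustration, not UCS Manager code): a pool of chassis SSDs is sliced into virtual disks, each mapped to a server node as if it were a local device.

    from dataclasses import dataclass

    @dataclass
    class ChassisSsd:
        slot: int
        capacity_gb: int
        used_gb: int = 0

        def carve(self, size_gb: int) -> bool:
            """Reserve a slice of this SSD if there is room."""
            if self.used_gb + size_gb > self.capacity_gb:
                return False
            self.used_gb += size_gb
            return True

    @dataclass
    class VirtualDisk:
        node: str          # cartridge server node that sees the "local" disk
        size_gb: int
        backing_slot: int  # physical SSD the slice lives on

    def allocate(pool: list[ChassisSsd], node: str, size_gb: int) -> VirtualDisk:
        """Slice the first SSD with room; the node sees only its own virtual disk."""
        for ssd in pool:
            if ssd.carve(size_gb):
                return VirtualDisk(node, size_gb, ssd.slot)
        raise RuntimeError(f"no SSD in the chassis pool has {size_gb} GB free")

    # Four cartridge server nodes, each handed a 200 GB slice of the shared pool
    pool = [ChassisSsd(slot=1, capacity_gb=960), ChassisSsd(slot=2, capacity_gb=960)]
    vdisks = [allocate(pool, f"cartridge-{c}/server-{s}", 200)
              for c in (1, 2) for s in (1, 2)]

In the next edition, we will go into more detail about Storage Profiles and other UCS Manager features pertinent to M-Series.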


UCS M-Series System Link Technology: The converged infrastructure story.

It almost feels like this blog entry should start with “Once upon a time…,” because it captures the journey of a young emerging technology and the powerful infrastructure tool it has become. The Cisco UCS journey starts with the tale of Unified Fabric and the Converged Network Adapter (CNA).

Most people think of Unified Fabric as the ability to put both Fibre Channel and Ethernet on the same wire between the server and the Fabric Interconnect or upstream FCoE switches. That is part of the story, but that part is as simple as putting a Fibre Channel frame inside an Ethernet frame. What is the magic that makes this happen at the server level? Doesn’t FCoE imply that the operating system itself would have to know how to present a Fibre Channel device in software and then encapsulate and send the frame across the Ethernet port? Possibly, but that would require OS FCoE software support, which in turn would mean CPU overhead and would require end users to qualify new software drivers and compare their performance against existing hardware FC HBAs.

For UCS, the success of converged infrastructure was due largely to the very first Converged Network Adapters that were released. These adapters presented existing PCIe Fibre Channel and Ethernet endpoints to the operating system, requiring no new drivers and no new qualification from the perspective of the operating system and users. At the heart of the adapter, however, was a Cisco ASIC that provided two key functions:

1.)  Present the physical functions for existing PCIe devices to the operating system without the penalty of PCIe switching.

2.)   Encapsulate Fibre Channel frames into Ethernet frames as they are sent to the northbound switch.

Converged Network Adapter

It is the second function that we often focus on, because that’s the cool networking portion that many of us at Cisco like to talk about. But how exactly do we convince the operating system that it is communicating with an Intel dual-port Ethernet NIC and a dual-port 4Gb QLogic Fibre Channel HBA? These are the exact same drivers used for the actual Intel and QLogic cards, so there’s got to be some magic there, right?

Well, yes and no. Let’s start with the no. Presenting different physical functions (PCIe endpoints) on a physical PCIe card is nothing new; it’s as simple as putting a PCIe switch between the bus and the endpoints. But like all switching technologies, a PCIe switch incurs latency, and it cannot encapsulate an FC frame into an Ethernet frame. That’s where the magic comes into play. The original Converged Network Adapter contained a Cisco ASIC that sits on the PCIe bus between the Intel and QLogic physical functions. From the operating system’s perspective, the ASIC “looks” like a PCIe switch providing direct access to the Ethernet and Fibre Channel endpoints, but in reality it can move I/O in and out of the physical functions without incurring the latency of a switch. The ASIC also provides a mechanism for encapsulating the FC frames into a specific Ethernet frame type to provide FCoE connectivity upstream.
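To make that encapsulation concrete, here is a simplified Python sketch (our illustration; real FC-BB-5 framing carries version bits and specific SOF/EOF code points, reduced here to placeholder values):

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

    def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap an untouched Fibre Channel frame for transport over Ethernet."""
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        fcoe_header = bytes(13) + b"\x2e"  # reserved/version bits, then an SOF marker (illustrative value)
        fcoe_trailer = b"\x41" + bytes(3)  # EOF marker (illustrative value) plus padding
        return eth_header + fcoe_header + fc_frame + fcoe_trailer

The operating system never sees any of this; the ASIC does the wrapping in hardware on the way to the northbound switch, and unwraps on the way back.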

The pure beauty of this ASIC is that we have evolved it from the CNA into the Virtual Interface Card (VIC). Traditional CNAs offer a limited number of Ethernet and FC ports to the system (two of each), fixed by the chipsets installed on the card. The Cisco VIC instead allows a variety of vNICs and vHBAs to be created on the card. The VIC not only virtualizes the PCIe switch; it virtualizes the I/O endpoints themselves.

Cisco Virtual Interface Card
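A toy model (purely our illustration) captures the difference: a CNA’s PCIe function list is fixed by the chipsets on the card, while a VIC synthesizes as many virtual functions as its configuration calls for.

    from dataclasses import dataclass

    @dataclass
    class PciFunction:
        kind: str  # "eth" or "fc"
        name: str

    def cna_functions() -> list[PciFunction]:
        """Fixed by the chipsets on the card: two NIC ports, two HBA ports."""
        return [PciFunction("eth", "eth0"), PciFunction("eth", "eth1"),
                PciFunction("fc", "fc0"), PciFunction("fc", "fc1")]

    def vic_functions(num_vnics: int, num_vhbas: int) -> list[PciFunction]:
        """Synthesized on demand: the adapter presents whatever the policy defines."""
        return ([PciFunction("eth", f"vnic{i}") for i in range(num_vnics)] +
                [PciFunction("fc", f"vhba{i}") for i in range(num_vhbas)])

    # A CNA always enumerates four functions; a VIC enumerates what you define.
    print(len(cna_functions()), len(vic_functions(num_vnics=8, num_vhbas=2)))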

So, in essence, what we have created with the Cisco ASIC that drives the VIC is a device that can present an end device directly to the operating system through a standard PCIe mechanism. The ASIC also provides a hardware mechanism for receiving native I/O from the operating system, then encapsulating and translating it where necessary without any OS stack dependencies: native Fibre Channel encapsulated into Ethernet, for example.

At the heart of the UCS M-Series servers is System Link Technology, the component that gives the compute nodes access to the shared I/O resources in the chassis. System Link Technology is the third generation of the technology behind the VIC and the fourth generation of Unified Fabric within the construct of Unified Computing. Its key function is the creation of a new PCIe physical function called the SCSI NIC (sNIC), which presents a virtual storage controller to the operating system and maps drive resources to a specific service profile within Cisco UCS.

System Link Technology

It is this innovative technology that gives each compute node within UCS M-Series its own virtual drive, carved out of the available physical drives in the chassis. This is accomplished using standard PCIe, not MR-IOV, so the operating system needs no special knowledge of any change in the PCIe frame format.

For a more detailed look at System Link Technology in the M-Series, check out the following white paper.

The important thing to remember is that hardware infrastructure is only part of the overall architectural design for UCS M-Series. The other key component of UCS is the ability to manage the virtual instantiations of the system components. In the next segment on UCS M-Series, Mahesh will discuss how UCS Manager rounds out the architectural design.
