
Cisco UCS Servers and Blade Server Evolution, part 1

Arguably, the place to begin a Cisco UCS blade server journey is with “Why blade servers?”  ‘Blades’ are cool.  There was “Blade Runner” (a cult classic), the Wesley Snipes “Blade” movies, several TV series with ‘blade’ in the name, and so on.  But for data centers and servers, why blades?  Where is the blade server TCO and ROI benefit that drives business decisions, and therefore innovation, and how do blade servers and chassis deliver it?

Before:

Blade servers have been around since about the year 2000 and arguably came about as a way to shrink data center footprints and reduce power consumption (reduced TCO).  Nothing new here for blade enthusiasts.  Rack servers were taking up more and more space and power in data centers.  The concept of blades was brilliant, insightful and simple: take as many commonly rack-delivered functions (services) as possible and package them together for delivery to a fixed group of servers.  The easy targets were server power, cooling, and I/O (well, some I/O functions).  To look at it another way, a blade chassis takes a data center rack with servers, I/O cables and switches, then shrinks it into a ‘building block’ unit.  Once you have the ‘unit’, put a single sheet metal wrapper around everything and voila, a blade chassis.  Overly simplistic, I know, but a close enough visual.  If you want a step-by-step evolution, Sean McGee (a colleague of mine here at Cisco) wrote a darn good overview, The “Mini-Rack” Approach To Blade Server Design.

Blade servers were a tremendous breakthrough, with great thought leadership from any number of individuals and companies.  Unfortunately, that is where traditional blade chassis design stalled.  What has been needed to move the TCO/ROI needle is the next iterative step.  Why iterative?  Remember I said above that only ‘some of the I/O functions’ were an easy target and that a limitation was ‘delivery to a fixed group of servers’ in a single wrapper.  So, back to ‘why iterative’: at a minimum that means the next developmental step, and better yet an evolutionary leap in design and functionality delivery.

Predictions:

Perusing the internet, I found a Gartner posting from 2008 discussing blade servers and predicting technology changes.  The quote below is from that posting:

Gartner Newsroom:  “…Chassis Interconnections — Today the chassis represents an infrastructure boundary. However, in three to five years time, chassis interconnects will enable resources to be shared. Although Gartner expects this innovation to be introduced as a blade market capability, it will eventually become a valuable direction for other server form factors as well…” STAMFORD, Conn., July 31, 2008; http://www.gartner.com/it/page.jsp?id=735112.

Looking at the quote above with 20/20 hindsight, I can’t help wondering where Gartner found their crystal ball.  What Gartner predicted would happen in three to five years, Cisco delivered 11 months later (June 2009), enabling unified management at the same time.  That’s a pretty bold claim, so it calls for some supporting information.

How to get there:

What can scale in a way that really has not already been done with legacy blade chassis?  I pointed out two places above where current (legacy) blade chassis stop short:

  1. Only part of the I/O functions were extracted to the chassis layer, and
  2. They only provide services to the fixed number of servers in each chassis, with no true scaling.

To move the bar for blade chassis, we had better solve those problems right off the bat.  So how do you solve the I/O problem when every server needs I/O?  Obviously you can’t eliminate it, so Cisco decided to unify it, eliminating the need for multiple different intra-chassis switches.  Unify means a single I/O type (protocol or ‘wrapper’), which makes the design less complex and scaling much easier, both of which are very good things.  Next is the issue of delivering only partial services (power, cooling, management and I/O) to a fixed group of servers.  The answer is easy: deliver everything at scale, servers and I/O and blade chassis and management and so on.  Wait a minute.  Didn’t legacy chassis already try that?  The real question is how to scale AT scale.

The real answer is to solve both scaling challenges (I/O complexity and management) with a single solution: deliver a new design, rather than retreading the old dead-end chassis ‘building block’ design.
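To make “scale AT scale” a bit more concrete, here is a minimal back-of-the-envelope sketch in Python.  It only illustrates the shape of the two curves; the per-chassis switch and management-point counts in it are assumptions I picked for the example, not specifications for any particular product.  The legacy model replicates switching and management with every chassis you add, while a unified design keeps those elements constant across the domain.

# Illustrative sketch only (Python): a toy model of how infrastructure elements
# grow as blade capacity is added. The per-chassis counts are assumptions chosen
# for illustration, not measured figures for any particular vendor's product.

LEGACY_SWITCHES_PER_CHASSIS = 4     # assumed: redundant Ethernet + FC modules per enclosure
LEGACY_MGMT_POINTS_PER_CHASSIS = 1  # assumed: one chassis manager per enclosure

def legacy_blades(chassis_count):
    """Legacy model: every chassis carries its own switching and management."""
    return {
        "intra_chassis_switches": chassis_count * LEGACY_SWITCHES_PER_CHASSIS,
        "management_points": chassis_count * LEGACY_MGMT_POINTS_PER_CHASSIS,
    }

def unified_fabric(chassis_count):
    """Unified model: chassis attach to a redundant pair of fabric interconnects
    carrying converged I/O, with one management point for the whole domain."""
    return {
        "intra_chassis_switches": 0,  # no switching layer inside each chassis
        "fabric_interconnects": 2,    # assumed: one redundant pair per domain
        "management_points": 1,       # one unified manager for the domain
    }

if __name__ == "__main__":
    for n in (1, 4, 8, 16):
        print(n, "chassis -> legacy:", legacy_blades(n), "| unified:", unified_fabric(n))

Run it and the legacy columns climb linearly with chassis count while the unified columns stay flat, which is the whole point of the next-step design.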

Arrival:

The requisite new design is what Cisco delivered: Cisco UCS is a variable-chassis-count, variable-blade-server-count, variable-I/O-capacity, smart, scaling architecture.  This “By The Bell” article has a good ‘outside looking in’ perspective, and here is a Cisco site that provides a quick architecture overview -- Unified Computing System (Cisco UCS) technology.

Is there more to this story?  Of course.  We need to illustrate how the real answer is delivered by Cisco UCS blade servers.  How does blade server architecture play out at scale?  What do the various blade infrastructures look like when adding additional compute capacity?  How does blade chassis infrastructure design impact complexity?  How does Cisco UCS scale differently from legacy blade server design?

If you are still with me and you are interested in a good read, I suggest this report from Forrester – The Total Economic Impact of Cisco UCS.    No, it is not “homework” for Part 2, but it couldn’t hurt.  :-)

Coming Next:  Part 2 -- Cisco UCS Servers, Blade Server Chassis and TCO.



2 Comments.


  1. Very useful Thomas – thanks! looking forward to part 2 :-)


    • Thomas Cloyd

      Glad you liked it, Mark. First of this coming week I’ll be posting Part 2. I am planning another 3 or 4 parts over the coming weeks discussing UCS and comparative design and TCO. Hope you will find them useful as well.
