
Cisco UCS Servers and Blade Server Evolution, Part 1, as the title suggests, discussed blade server evolution and why Cisco UCS is a game changer. Now let’s talk about the implications for blade server TCO (Total Cost of Ownership) and how the Cisco Unified Computing System scales vs. legacy blade architectures.

Blade Server TCO and Scale

Scale is the crux of the problem that has historically been the barrier for blade servers to deliver on their initial promise. Scale for I/O. Scale for servers. Scale for management. Cisco identified these shortfalls in the legacy blade architecture and came to the marketplace with an innovative, game-changing, redefined architecture: Cisco UCS.

As discussed in Part 1, to move the bar for blade chassis we need to better consolidate I/O, management, and scale. Enter Cisco UCS. Deliver everything at scale: servers, I/O, blade chassis, management, and more. Deliver a new design, rather than retreading an old dead-end chassis ‘building block’ design.

Efficiency and Scale by Design

The requisite new design is what Cisco delivered. Cisco UCS is a variable chassis count, variable server count, variable I/O capacity, smart scaling architecture.

Figure 1 is the Cisco design, a converged I/O architecture (FCoE: lossless FC and Ethernet combined) that scales. It provides easy, efficient infrastructure scaling across multiple chassis, multiple servers, racks, and rows; and yes, it even includes the integration of rack servers into the solution.

Figure 1: Cisco UCS architecture – 10 x 8-blade chassis = 80 blade servers, 20 cables (add more I/O by simply adding cables – easy scaling)

Figure 2 is a non-converged legacy blade chassis I/O architecture. More = more… of everything. More chassis to hold more blades is OK, that makes sense. But more switches? More cables? More points of management? More complexity? Not too good.

Figure 2: Non-converged chassis – 5 x 16-blade chassis = 80 blade servers, 50 cables

Figure 3 is what some have delivered as a “new converged solution”. It resides in a legacy blade chassis and does convergence, kind of… but in the chassis only, and it still has the mare’s nest of cables you can see. It has separate northbound network types leaving each chassis, just like the non-converged chassis above, and it has the same cable count as the ‘old siloed chassis networking’ design in Figure 2. A few fewer switches and a few fewer management points, yes. But there is no true scaling, and you are still stuck adding more of the same with every building block. Repeated complexity with no efficiency = no scaling.

Figure 3: New ‘not quite converged’ (legacy) chassis – 5 x 16-blade chassis = 80 blade servers, 50 cables

The legacy chassis designs (Figures 2 & 3) just don’t work well at scale. Each chassis is identical: the building-block mentality. Identical building blocks are outstanding if you are in the construction business. Everything is, well, identical. You can design buildings of various sizes simply through repetition. In the data center, though, identical building blocks are not so good. The promise of efficiency at scale just does not exist. What exists is scale with limitations, because you repeat everything, all networking and management points, over and over and over again, increasing complexity. This doesn’t really deliver much efficiency for the user. Call it “accidental architecture”: no real scalability, definitely no real efficiency, and no business advantage for the data center. So how is Cisco UCS any different? That’s a fair question.

Each legacy chassis has either 6 network management touch points (Figure 2: four chassis switches + two chassis management points) or 4 (Figure 3: two chassis switches + two chassis management points). This means you have either 30 or 20 individual network management touch points per 80 blades, respectively. Compare that with a single management touch point, Cisco UCS Manager, for the 80 Cisco UCS servers shown here. The UCS illustration delivers the same network capability as Figures 2 and 3: 10 Gigabit Ethernet, FC (via FCoE), and dedicated out-of-band management.
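To make that arithmetic concrete, here is a minimal Python sketch, purely for illustration, that totals blades, cables, and management touch points for each design using only the per-chassis counts quoted in the figures and text above:

```python
# Back-of-the-envelope totals for the three 80-blade designs,
# using the per-chassis counts quoted in Figures 1-3 above.

designs = {
    # name: (chassis, blades per chassis, cables per chassis, mgmt points per chassis)
    "Figure 2: non-converged legacy":  (5, 16, 10, 6),  # 4 chassis switches + 2 chassis mgmt
    "Figure 3: 'not quite converged'": (5, 16, 10, 4),  # 2 chassis switches + 2 chassis mgmt
    "Figure 1: Cisco UCS":             (10, 8, 2, 0),   # no per-chassis network mgmt point
}

UCS_MANAGER = 1  # clustered Fabric Interconnects appear as a single UCS Manager instance

for name, (chassis, blades, cables, mgmt) in designs.items():
    total_blades = chassis * blades
    total_cables = chassis * cables
    total_mgmt = chassis * mgmt if mgmt else UCS_MANAGER
    print(f"{name}: {total_blades} blades, {total_cables} cables, {total_mgmt} management points")
```

Running it simply reproduces the numbers above: 50 cables and 30 or 20 management points for the legacy designs, versus 20 cables and one UCS Manager for the UCS design.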

Simplicity and Ease of Scale Built-in

OK, so these are interesting pictures, but what do they mean? What is the real answer? Now let’s talk about Cisco (Figure 1). The Cisco Unified Computing System uses two switches that have a single management point because they are clustered, and a single pair of Fabric Interconnects scales to multiple chassis (Fabric Interconnects -- remember Gartner talking about chassis interconnects in Part 1?). Gartner’s crystal ball again. Oh, and before I forget, Cisco UCS actually does include the ability to integrate rack servers into the solution -- once again Gartner’s crystal ball is spot on. (Check my Cisco UCS… Part 1 blog for the Gartner quote and link.)

With Cisco UCS, if you need more servers, just add another chassis to your rack and connect it to your existing Fabric Interconnects -- the new UCS 6248UP Fabric Interconnect has 48 universal ports. Or, if you need a rack server, as indicated above, you can also connect it into your UCS environment. Need more I/O from a chassis? Plug in additional cables and get to 20 Gbps per blade, and even more. Want more? If you have various chassis with different I/O requirements, it is no problem for UCS. There is no requirement that each chassis be identically configured. Cisco UCS is a variable chassis count, variable server count, variable I/O capacity, smart, scaling architecture delivering true business advantages. The end game for Cisco UCS is that within a single UCS environment you can deploy various servers, with differing system resources, servicing different applications with different requirements: any workload, any server, in a single flexible, dynamic system.
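As a rough sketch of how that variable I/O capacity trades off against chassis count on a single Fabric Interconnect pair, here is an illustrative Python example. The 48 universal ports per 6248UP and the 8-blade chassis come from the text and Figure 1; the number of ports reserved for uplinks, the 10GE link speed, and the cabling options are assumptions chosen for the example, not a sizing guide.

```python
# Illustrative scaling sketch for one pair of UCS Fabric Interconnects (FIs).
# 48 ports per 6248UP and 8 blades per chassis are from the article / Figure 1;
# the reserved uplink count and 10GE link speed are assumptions for this example.

FI_PORTS = 48            # universal ports on each UCS 6248UP Fabric Interconnect
UPLINK_PORTS = 8         # assumed ports per FI reserved for LAN/SAN uplinks
BLADES_PER_CHASSIS = 8   # blades per chassis, as in Figure 1
LINK_GBPS = 10           # assumed 10GE/FCoE speed per cable

def capacity(cables_per_chassis_per_fi: int):
    """Chassis count, blade count, and per-blade bandwidth for one cabling choice."""
    usable = FI_PORTS - UPLINK_PORTS
    chassis = usable // cables_per_chassis_per_fi
    blades = chassis * BLADES_PER_CHASSIS
    # total bandwidth into a chassis across both fabrics (A + B), shared by its blades
    gbps_per_blade = 2 * cables_per_chassis_per_fi * LINK_GBPS / BLADES_PER_CHASSIS
    return chassis, blades, gbps_per_blade

for cables in (1, 2, 4, 8):
    c, b, g = capacity(cables)
    print(f"{cables} cable(s) per chassis per FI: {c} chassis, {b} blades, {g:.1f} Gbps per blade")
```

The point of the sketch is the trade-off described above: the same Fabric Interconnect pair can fan out to many lightly cabled chassis, or to fewer chassis with more I/O per blade, and different chassis on the same pair can make different choices.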

So. Still with me? Thought I’d better check. Good news: there is no homework for Part 3. … Oh alright, if you insist. Have some fun: The Real Story About the UCS Automobile. It’s a little old but still entertaining.

Is there more to this story? Of course. Now that we have looked at the architectural differences, we need to look at how it all plays out for TCO. What is the infrastructure impact on TCO? How do the different blade chassis infrastructure designs impact TCO as the environment scales? My next blog will look at the real answer for TCO with different blade chassis architectures.

Coming Next:  Part 3 – Cisco UCS, Blade Server TCO and Blade Chassis Infrastructure
