

Data Center and Cloud

I just finished making a few slides comparing switching architectures in the data center on the LAN side (as much as I love the SAN side, let's leave it out of this discussion for a few minutes). The architectures we deploy when aggregating switches or when building a DC core network seem pretty consistent: most customers use a modular and highly available platform for these functions (I think; feel free to disagree). In the server access layer, however, there are three (or three-and-a-half) predominant designs I see within the data center architecture.

1) When aggregating servers larger than 1RU, or servers with a mix of interface types and densities, I often see an end-of-row design with one or two larger modular switches (Cat6500) deployed to aggregate the servers. (Do you usually use one or two switches to aggregate them?)

2) When I have 1RU servers stacked 40 high in a rack, one or two 1RU rack switches (like the Catalyst 4948-10G) are often used to aggregate all of these servers with Gigabit Ethernet and then run a couple of 10GbE links back to the aggregation switches. (See the quick oversubscription sketch below.)

3) When using blades, most customers seem to deploy blade switches. These are usually home-run back into the aggregation tier, since it is quite hard to fit more than 2-3 blade server chassis into a rack because of thermal constraints.

3.5) Had to have my half… :) I have seen one other design deployed in larger data centers using the pass-thru module, or sometimes even the blade switch, aggregated into a series of rack switches. The main reason is so that the rack is a compute entity in its own right: it can be rolled in, wheels locked, powered up, two to four fibers plugged in, and it's online. It really optimizes for ease of deployment in large-scale facilities, but at a cost trade-off.

Some questions I have:

a) How will 10GbE servers be connected into the network? Straight into the aggregation switches? End-of-row switches?

b) Are there other designs and architectures that you commonly see used?

Thanks!

dg
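For design 2), here is a minimal back-of-the-envelope sketch of the uplink oversubscription, written in Python purely for illustration. The top-of-rack numbers are the ones from the post (40 GigE-attached servers, two 10GbE uplinks); the end-of-row row size and uplink count are assumptions added for comparison, not figures from the post.

```python
# Back-of-the-envelope access-layer oversubscription.
# Top-of-rack numbers mirror design 2) in the post; the end-of-row
# numbers are illustrative assumptions.

def oversubscription(server_ports: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Offered server bandwidth divided by available uplink bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Top-of-rack: 40 x 1 GbE servers, 2 x 10 GbE back to aggregation.
tor = oversubscription(40, 1, 2, 10)    # 2.0 -> 2:1

# End-of-row (assumed): 240 x 1 GbE servers in a row, 8 x 10 GbE uplinks
# from the modular end-of-row switch.
eor = oversubscription(240, 1, 8, 10)   # 3.0 -> 3:1

print(f"Top-of-rack: {tor:.1f}:1")
print(f"End-of-row:  {eor:.1f}:1")
```

The same arithmetic is part of why 10GbE-attached servers (question a) tend to get pulled toward end-of-row or straight into the aggregation layer: a handful of them exhausts a rack switch's uplink budget at ratios like these.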


3 Comments.


  1. Simon Hamilton

    I've used both in the few DCs I've built out in the past year – a pair of central 6500s is great for a permanent / in-house DC setup, or one where the customer has a colo cage. Where individual racks are at a colo, switches at the top of the racks become sensible, with fiber in between. The 4948-10GE is unfortunately way too expensive for top of rack – I've been using a pair of other-vendor switches which are 48-way gig + 2 x 10 gig and cost less than a 3750, plus a third unmanaged switch strictly for iLOs, SNMP power strips, etc. These are then aggregated onto another vendor's 10 gig switches, which finally go to the core 6500s – which then only need a single 10 gig card each. This setup works great for iSCSI, though I confess I have only seen the 10 gig links peak at 3.5 gig as yet. I look forward to when Cisco reduces their 10 gig pricing to the point I can sell it successfully – 10 gig iSCSI HBAs are showing up now, and they and blade server chassis will be the driver for access-layer 10 gig.
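    Simon's topology is also a nice illustration of how oversubscription compounds tier by tier. In the sketch below, the access numbers (48 x 1 GbE down, 2 x 10 GbE up) come from his comment; the aggregation-tier port counts are assumptions, purely for illustration.

    ```python
    # Oversubscription compounds tier by tier. Access numbers match the
    # comment (48 x 1 GbE down, 2 x 10 GbE up); aggregation-tier port
    # counts are assumed.

    def tier_ratio(downlink_gbps: float, uplink_gbps: float) -> float:
        """Offered downstream bandwidth divided by upstream bandwidth for one tier."""
        return downlink_gbps / uplink_gbps

    access      = tier_ratio(48 * 1, 2 * 10)    # 2.4:1 per top-of-rack switch
    aggregation = tier_ratio(20 * 10, 2 * 10)   # assumed: 20 rack-facing 10 GbE, 2 x 10 GbE to core

    print(f"Access tier:      {access:.1f}:1")
    print(f"Aggregation tier: {aggregation:.1f}:1")
    print(f"End to end:       {access * aggregation:.1f}:1")   # ~24:1 worst case
    ```

    The observed ~3.5 gig peak on the 10 gig links suggests those worst-case ratios are rarely hit in practice, which is the usual argument for tolerating oversubscription at the access layer.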


  2. Bill Dufresne

    I have this discussion with most customers when dealing with greenfield situations. Often it comes down to a few important items:

    1) What is the communication path between servers? (Are common services located within a row, or dispersed to guard against power issues?)

    2) What is the cost of cabling runs within a row vs. within a rack? (Is 10GE connectivity to servers in any plan for the next 5 years? Also, how does in-row cabling impact cooling?)

    3) How does cabling for Ethernet differ from SAN? (Most often, customers who opt for a structured cabling environment will use the switch at row end, because Ethernet data, Ethernet iLO/management, and SAN can all use a similar structure.)

    I have seen customers create a 2.5 (need my half too) where MDS 92xx or 91xx switches are used in a top-of-rack design, and as such, fiber is the only cabling run out of a rack. Within the aggregation tier, high speed and low latency really show the capabilities of the MDS and 6500 for GEC, 10GE, 10GEC, and 4 and 10 Gbps FC.

    Thanks,
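    Bill's point 2) (cabling runs within a row vs. within a rack) can be made concrete with a rough count. Every number in the sketch below is a hypothetical example row, not a figure from his comment.

    ```python
    # Rough cable-count comparison for an example row of 10 racks with
    # 40 x 1 GbE servers per rack (all quantities hypothetical).

    RACKS_PER_ROW = 10
    SERVERS_PER_RACK = 40

    # End-of-row: every server NIC is a copper run through the row to the
    # end-of-row chassis; only the chassis uplinks leave the row.
    eor_copper_runs = RACKS_PER_ROW * SERVERS_PER_RACK   # 400 in-row runs
    eor_fiber_out   = 8                                  # assumed chassis uplinks

    # Top-of-rack: server copper stays inside the rack; each rack sends a
    # handful of fibers (e.g. 2 x 10 GbE per switch, dual switches) upstream.
    tor_copper_runs = SERVERS_PER_RACK                   # per rack, never leaves it
    tor_fiber_out   = RACKS_PER_ROW * 4                  # 4 fibers per rack

    print(f"End-of-row:  {eor_copper_runs} in-row copper runs, {eor_fiber_out} fibers out of the row")
    print(f"Top-of-rack: {tor_copper_runs} copper runs per rack, {tor_fiber_out} fibers out of the row")
    ```

    The trade-off is the one the original post notes: top-of-rack buys fewer and shorter out-of-rack runs at the cost of more access switches to buy and manage.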


  3. I also want to add half of a design :-) You know what they say about the Dutch (we're cheap), and that goes for some of these designs as well. I like both designs 1 and 2 (end-of-row and top-of-rack). Both have advantages, with top-of-rack making cabling a lot easier compared to end-of-row, _IF_ you stick to the design rules.

    What I see sometimes (and thus my half) is that people like the idea of top-of-rack, then continue to brainstorm around that idea… They then need dual top-of-rack switches because servers have dual production uplinks. They then don't have 40 servers in a rack but only 10 or 15 multi-RU servers… and then they say, "hmm, having 2 switches in the top of the rack is not very efficient port usage." So they start to look at using ports for servers in neighboring racks to get port density up. That leads to 3 racks using the top-of-rack switches in the middle rack.

    And that is exactly what the END-OF-ROW design was for. In the process of thinking up this solution, they forget one really important thing: cabling is really neat with end-of-row or top-of-rack designs. But how are you going to connect servers across racks into the top-of-rack switch in the middle rack? You know that is going to end with cables running between cabinets, and they will never be the correct length. From an operations point of view, it will be messy.

    The point I'm trying to make: when thinking about these designs, don't forget to think about the people who have to operate them. Think about the cabling/patching work. Both end-of-row and top-of-rack take that into consideration. But don't create a variation on the designs (don't be cheap – you'll pay the price later).

    TJ
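    The port-utilization temptation TJ describes is easy to quantify. The server counts below come from his comment; the 48-port switch size is an assumption.

    ```python
    # Why people are tempted to share a top-of-rack switch across racks:
    # port utilization with few, large servers. Server counts are from the
    # comment; the 48-port switch size is an assumption.

    servers_in_rack = 15      # multi-RU servers, not 40 x 1RU
    nics_per_server = 2       # dual production uplinks
    ports_per_switch = 48
    switches_per_rack = 2     # a pair for redundancy

    used = servers_in_rack * nics_per_server
    available = ports_per_switch * switches_per_rack
    print(f"Port utilization: {used}/{available} = {used / available:.0%}")   # ~31%
    ```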

