

Data Center and Cloud

You know, it’s time I get back to my geeky roots here and talk about things I find very exciting: backplanes, fabrics, forwarding engines… BANDWIDTH! It’s one of the many reasons I love blogging, because we can talk about things that matter in IT that most don’t care about: Big Pipes!

On the Nexus 7000 we use numbers like 15 Terabits, 230Gb per slot, and so on… let’s take one of these and extrapolate HOW we get there. I think this may help in drawing comparisons to other architectures out there.

230Gb per slot, and how we get there…

From the I/O module to the Fabric Module there is a series of thick copper traces and high-density connectors. Each pair of copper traces is today clocked at 3.125Gbps. Each fabric channel is made up of 16 pairs of copper, for 16 x 3.125Gbps, or 50Gbps half-duplex and 25Gbps of full-duplex bandwidth per fabric channel (each pair of Cu at 3.125Gbps is unidirectional). We have to encode on the wire, and we use a 24b/26b encoding scheme that yields just north of 23Gbps of real-world bandwidth per fabric channel.

Each switch fabric chip has 26 fabric channels. Two fabric channels connect from each switch fabric ASIC to each line card in the 10-slot chassis. Two fabric channels per slot at 23Gb each is 46Gb per slot per Fabric Module.

The Nexus 7000 can support up to five fabric modules. So with five switch fabric modules at 46Gbps each we have 230Gb per slot. Now I do want to clarify that this is 230Gb IN and 230Gb OUT concurrently from every slot in the system when five fabric modules are present. Bandwidth scales up and down linearly with the addition or removal of fabric modules. You need a minimum of one and a maximum of five.

How do we ensure we use all that bandwidth effectively?

First, the Nexus 7000 has a virtual output queue system with fabric arbitration, or, for short, an Arbitrated VOQ system. VOQ means we have a queue on each ingress I/O module for every egress port on the system: 1024 queues on every I/O module, one for each potential 10GbE interface. Packets go through L2 and L3 lookup on the I/O module’s forwarding engine, which then replies with a Fabric Port of Exit (FPOE) header. This FPOE header is used by the fabric to forward a packet to the right destination. It is also used to identify which VOQ to put the packet into. (There’s a toy sketch of this idea at the end of the post.)

If we have lots of 64-byte frames in the VOQ we can concatenate them, so we only put one FPOE header on a larger block of 64-byte frames. If we have jumbo frames we segment them into smaller sizes that are more optimal for the fabric and deliver more deterministic latency, especially under load. Each VOQ can only send data across the fabric when the receiving I/O module has the capability of receiving the data and serializing it onto the egress interface. This is extremely beneficial as it pushes all congestion to the source, aligns all of the larger buffering into one place, and is one of the key technical attributes that allows us to ensure a lossless fabric architecture. The fabric arbiter also ensures that we use all of the fabric modules’ bandwidth as efficiently as possible.

15Tb? Well, if you have been building a spreadsheet or following along with a calculator (there’s a quick sketch of the math below), just know that the same system applies to the 18-slot chassis with 16 I/O module slots. And that 3.125Gbps number we discussed per pair of Cu unidirectionally? Well, we know for certain we can do over 2x that today, if not even higher…

dg
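For anyone who likes to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It is purely my own illustration (not Cisco tooling of any kind); the constants are simply the figures quoted in the post.

```python
# Hypothetical back-of-the-envelope script; constants are the figures
# quoted in the post, nothing more.

PAIR_GBPS = 3.125                 # each copper pair, unidirectional
PAIRS_PER_CHANNEL = 16            # per fabric channel (8 pairs each direction)
ENCODING_EFFICIENCY = 24 / 26     # 24b/26b line encoding
CHANNELS_PER_SLOT_PER_FABRIC = 2  # fabric channels from each fabric module to each slot
MAX_FABRIC_MODULES = 5
IO_SLOTS_18SLOT_CHASSIS = 16      # the 18-slot chassis has 16 I/O module slots

# One fabric channel: 25Gbps each direction raw, ~23Gbps after encoding
raw_channel_gbps = PAIR_GBPS * PAIRS_PER_CHANNEL / 2
usable_channel_gbps = raw_channel_gbps * ENCODING_EFFICIENCY

# Per slot: ~46Gbps per fabric module, ~230Gbps with all five installed
per_slot_per_fabric_gbps = usable_channel_gbps * CHANNELS_PER_SLOT_PER_FABRIC
per_slot_gbps = per_slot_per_fabric_gbps * MAX_FABRIC_MODULES

# System: count both directions, 16 I/O slots, and the "over 2x" per-pair
# speed headroom mentioned at the end of the post
system_tbps = per_slot_gbps * 2 * IO_SLOTS_18SLOT_CHASSIS * 2 / 1000

print(f"usable per fabric channel      : {usable_channel_gbps:.1f} Gbps")
print(f"per slot per fabric module     : {per_slot_per_fabric_gbps:.1f} Gbps")
print(f"per slot with 5 fabric modules : {per_slot_gbps:.1f} Gbps")
print(f"18-slot chassis with 2x pairs  : {system_tbps:.1f} Tbps")
```

Running it prints roughly 23.1, 46.2, 230.8 and 14.8, which lines up with the “just north of 23Gbps”, 46Gb, 230Gb and ~15Tb figures above.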
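And since the Arbitrated VOQ behavior is easier to see than to describe, here is a toy sketch of the core idea (again, my own illustrative Python, not the actual ASIC or NX-OS logic): one queue per egress port on the ingress module, and nothing crosses the fabric until the arbiter says the egress side can take it.

```python
# Toy illustration of arbitrated virtual output queuing. All class and
# variable names here are my own inventions for the example.

from collections import deque

class IngressModule:
    def __init__(self, num_egress_ports):
        # one virtual output queue per egress port in the system
        self.voqs = {port: deque() for port in range(num_egress_ports)}

    def enqueue(self, packet, egress_port):
        # the forwarding-engine lookup has already picked the egress port
        self.voqs[egress_port].append(packet)

    def service(self, arbiter):
        # send at most one packet per granted egress port this cycle
        for port, queue in self.voqs.items():
            if queue and arbiter.grant(port):
                packet = queue.popleft()
                print(f"sent {packet!r} across fabric to egress port {port}")

class Arbiter:
    """Grants fabric access only when the egress port can actually receive."""
    def __init__(self, credits_per_port):
        self.credits = dict(credits_per_port)

    def grant(self, port):
        if self.credits.get(port, 0) > 0:
            self.credits[port] -= 1
            return True
        return False  # no grant: the packet stays queued at the source

# toy run: egress port 1 is congested (no credits), so its traffic waits
ingress = IngressModule(num_egress_ports=2)
ingress.enqueue("pkt-A", egress_port=0)
ingress.enqueue("pkt-B", egress_port=1)
arbiter = Arbiter({0: 1, 1: 0})
ingress.service(arbiter)  # only pkt-A is sent; pkt-B is held at ingress
```

The point of the toy run is the last line: the packet headed for the congested port simply stays queued on the ingress module, which is the “push congestion to the source” behavior described above.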


1 Comment


  1. Is this the right math? 15 Tbps = 230 Gbps per slot out + 230 Gbps per slot in = 460 Gbps per slot; 460 Gbps x 16 slots in an 18-slot chassis = 7,360 Gbps; x 2 for future speed enhancements = ~15,000 Gbps = 15 Tbps?

