
The Nexus 3548 with Algo Boost was announced last week and received a lot of positive buzz around this game-changing innovation. To follow up on Berna Devrim’s Introduction Blog, I am introducing a multipart series in which Cisco experts go into more specifics. For part 1 of the series, I recently had the opportunity to chat with Will Ochandarena about the latency enhancements. Will is a Senior Product Manager in the Server Access and Virtualization Business Unit. In this role, he is responsible for the Nexus 3548 switch and Cisco’s low-latency switching strategy.

GD: The Cisco Nexus 3548 switch with Algo Boost was announced on September 19th and received a lot of positive attention. Can you elaborate a little more on the latency that this switch can achieve? How does this benefit our financial customers?

WO: The custom switching ASIC in the Nexus 3548, codenamed Monticello, sets a new bar for switching latency. Our engineers worked tirelessly to eliminate unnecessary nanoseconds from the forwarding path, tweaking it down to as low as 190 nanoseconds (ns). Best of all, this latency is achieved even when we are doing full layer-2 and layer-3 switching, with features such as Network Address Translation (NAT) enabled. We actually went as far as to offer a few different switching modes, each with different latency and forwarding characteristics, in order to give our customers the most flexibility in their deployments.

In terms of the impact on our end customers, we consistently hear from companies in the financial community that switch latency has a direct impact on the profitability of their business. Trading firms, as well as the exchanges and other participants, gain a significant business advantage if the supporting infrastructure enables them to acquire data and execute trades nanoseconds faster than the competition.

GD: Can you tell me some more about the different latency modes? Are they configurable?

WO: Sure, as I mentioned earlier, the Nexus 3548 has three different forwarding modes, each with different latency and forwarding characteristics. Here is a rundown of these modes:

The default operating mode of the Nexus 3548 is what we call Normal Mode. In this mode, the Nexus 3548 is extremely feature-rich and scalable while forwarding packets at ultra-low latencies of about 250 nanoseconds.

For those customers where latency is everything, the device can be globally configured for Warp Mode. In this mode, the latency is slashed to 190ns. The main trade-off is that the table sizes of the device are reduced, though they remain larger than what typical financial trading applications require: 4,000 unicast routes, 8,000 multicast routes, and 8,000 hosts.

No matter what the global mode of the Nexus 3548 is, Normal or Warp, customers can take advantage of an innovative feature called Warp SPAN. In this mode, all traffic entering the switch on a single 10G port (port 36 to be exact) is copied to a configurable set of egress ports at a never-before-seen latency of 50ns or less. Best of all, the traffic entering port 36 of the switch still passes through the normal forwarding pipeline, so this interface can be used to establish peering adjacencies with upstream providers. This feature is ideal for the efficient delivery of multicast stock market data to a set of servers commonly referred to as “feed handlers”.
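
To put the three modes side by side, here is a rough Python sketch of how a deployment might pick the lowest-latency mode whose tables still fit its requirements. The latency figures and the Warp Mode table limits come straight from Will’s answers; the Normal Mode limits are not quoted in this interview, so the sketch simply treats them as ample, and the selection logic itself is purely illustrative rather than anything built into the switch.

```python
# Illustrative sketch (not Cisco tooling): choosing a Nexus 3548 forwarding mode
# from the per-hop latencies and Warp Mode table limits quoted in the interview.
# Normal Mode table sizes are not given here, so they are treated as "ample" (None),
# which is an assumption made purely for the sake of the example.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ForwardingMode:
    name: str
    latency_ns: int
    max_unicast_routes: Optional[int]    # None = treated as ample in this sketch
    max_multicast_routes: Optional[int]
    max_hosts: Optional[int]


MODES = [
    ForwardingMode("Warp", 190, 4_000, 8_000, 8_000),   # figures from the interview
    ForwardingMode("Normal", 250, None, None, None),     # table sizes assumed ample
]


def pick_mode(unicast: int, multicast: int, hosts: int) -> ForwardingMode:
    """Return the lowest-latency mode whose tables can hold the deployment."""
    for mode in sorted(MODES, key=lambda m: m.latency_ns):
        needs = [
            (unicast, mode.max_unicast_routes),
            (multicast, mode.max_multicast_routes),
            (hosts, mode.max_hosts),
        ]
        if all(limit is None or need <= limit for need, limit in needs):
            return mode
    raise ValueError("no mode fits this deployment")


# A small trading pod fits comfortably within Warp Mode's tables.
print(pick_mode(unicast=1_200, multicast=3_000, hosts=2_500).name)   # Warp
# A larger aggregation tier would fall back to Normal Mode.
print(pick_mode(unicast=20_000, multicast=3_000, hosts=2_500).name)  # Normal
```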

GD: Can you give a little more detail on Warp SPAN? 50-nanosecond latency is insane!

WO: It is insane, and it is features like this that remind us why we invest in our own silicon. Typically, multicast market data enters a network switch at the customer edge and passes through forwarding logic that decides which streams need to go to which downstream feed handler servers. However, we heard from customers that in some small colocation environments the feed handler servers want to receive all multicast feeds coming from the provider. This means the typical routing and filtering logic the traffic passed through was overkill.

Basically, the idea behind Warp SPAN was to shortcut traffic from the ingress port directly to the egress ports specified by the network administrator, bypassing the routing and filtering logic.  This is what saved us about 140ns of latency compared to our lowest latency switching mode.  Like I said earlier, this traffic is also sent into the normal forwarding pipeline, so it can be routed normally to a different set of egress interfaces, or to the switch CPU to establish routing protocol adjacencies.
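
To make that shortcut concrete, here is a conceptual Python sketch of the idea, not an emulation of the actual ASIC or its configuration: anything arriving on the dedicated ingress port is copied straight to a configured set of egress ports, while the same packet also continues into a placeholder “normal pipeline”. The egress port numbers and the normal_pipeline stub are assumptions invented for the example.

```python
# Conceptual sketch of the Warp SPAN shortcut described above -- a thought model,
# not an emulation of the Nexus 3548 ASIC or its CLI configuration.

WARP_SPAN_INGRESS = 36             # the dedicated 10G ingress port from the interview
WARP_SPAN_EGRESS = [1, 2, 3, 4]    # hypothetical feed-handler ports chosen by the admin


def normal_pipeline(packet: bytes, ingress: int) -> list[int]:
    """Placeholder for the full L2/L3 routing and filtering lookup (assumption)."""
    return []  # e.g. routed copies, or a CPU punt for routing-protocol adjacencies


def forward(packet: bytes, ingress: int) -> list[int]:
    egress: list[int] = []
    if ingress == WARP_SPAN_INGRESS:
        # Low-latency path: copy the packet straight to the configured egress set,
        # bypassing the routing and filtering logic entirely.
        egress.extend(WARP_SPAN_EGRESS)
    # The same packet still enters the normal pipeline, so it can also be routed
    # elsewhere or used to maintain peering adjacencies with the upstream provider.
    egress.extend(normal_pipeline(packet, ingress))
    return egress


print(forward(b"multicast market data", 36))  # -> [1, 2, 3, 4]
print(forward(b"other traffic", 5))           # -> []
```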

GD: With all the buzz generated last week, customers and partners are eager to purchase and deploy. When will the Nexus 3548 be orderable and shipping?

WO: The Nexus 3548 is orderable today. We expect shipping to begin sometime in November 2012.

I’d like to thank Will for this valuable information. For more information on the Nexus 3548, visit the Cisco Nexus 3548 product page.


1 Comment.


  1. Dr. Jose A. Wong - Perez

    …Astounding accomplishment in latency reduction with the routing and filtering overkill…just nice and easy move…Thanks for sharing such spectacular accomplishment!…my respects and regards from Puerto Rico…

