Cisco Blogs


Cisco Blog > Cisco Interaction Network

Blink: You’re Too Slow

December 11, 2012 at 8:04 am PST

When playing the high-speed switching game, timing is everything.  Timing sets the pace for visibility, establishing the 'where and when'; it enables correlation across a broad computing environment, and it underpins compliance and digital forensics with precision timestamps.  Every element of the data center requires timing accurate enough to leave no room for error.

Speed is the other, more celebrated, if not obvious, requirement of the high-speed switching game.  Speed that is measured in increments requiring some new additions to my vocabulary.

When looking at the ways in which we measure speed and regulate time throughout the network, I was of course familiar with NTP, or Network Time Protocol.   NTP provides millisecond-level timing…which, crazy enough, is WAY TOO SLOW for this high-speed market.   Now, being from the South, I may blink a little slower than other people, but I read that the average time it takes to blink an eye is 300 to 400 milliseconds!  A millisecond is a thousandth of a second.  That is considered slow?

Turns out 'microsecond'-level detail is our next consideration.  A microsecond is equal to one millionth (10⁻⁶, or 1/1,000,000) of a second. One microsecond is to one second as one second is to roughly 11.57 days. To keep our blinking example alive: 350,000 microseconds.  Still too slow.

Next unit of measure?  The Nanosecond. A nanosecond is one billionth of a second.  One nanosecond is to one second as one second is to 31.7 years.  Time to blink is just silly at this point.
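If you want to check these ratios for yourself, a few lines of Python will do it (the numbers here are plain unit arithmetic, not anything switch-specific):

```python
MICROSECONDS_PER_SECOND = 1_000_000
NANOSECONDS_PER_SECOND = 1_000_000_000
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 86_400 * 365.25

# 10^6 seconds expressed in days
days = MICROSECONDS_PER_SECOND / SECONDS_PER_DAY     # ≈ 11.57
# 10^9 seconds expressed in years
years = NANOSECONDS_PER_SECOND / SECONDS_PER_YEAR    # ≈ 31.7
# a 350-millisecond blink, in microseconds
blink_us = 350 * 1_000                               # 350,000

print(f"{days:.2f} days, {years:.1f} years, {blink_us:,} microseconds")
```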

At one point I used to think higher speeds were attainable simply through more bandwidth.  This may be why the idea of 'low latency' seems so counter-intuitive. As you hopefully understand at this point, there are limits to how fast data can move, and real gains in this area can only be achieved through gains in efficiency -- in other words, the elimination (as much as possible) of latency.

For Ethernet, speed really is about latency.  Ethernet switch latency is defined as the time it takes for a switch to forward a packet from its ingress port to its egress port. The lower the latency, the faster the device can move packets toward their final destination.  Also important within this 'need for speed' is avoiding packet loss. The magic is in the balancing act: speed and accuracy at levels that challenge our traditional understanding of physics.
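As a toy illustration of that ingress-to-egress definition (the timestamps below are invented for the example; real measurements come from test gear, not software):

```python
# Hypothetical per-packet hardware timestamps, in nanoseconds.
ingress_ns = {"pkt1": 1_000_000_100, "pkt2": 1_000_000_400}
egress_ns  = {"pkt1": 1_000_000_290, "pkt2": 1_000_000_630}

# Switch latency = time from arrival at the ingress port
# to departure from the egress port.
latency_ns = {p: egress_ns[p] - ingress_ns[p] for p in ingress_ns}
print(latency_ns)  # {'pkt1': 190, 'pkt2': 230}
```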

Cisco's latest entrant in the world of high-speed trading is the Nexus 3548: a slim, 48-port, line-rate switch with latency as low as 190 nanoseconds. It includes a Warp switch port analyzer (SPAN) feature that facilitates the efficient delivery of stock market data to financial trading servers in as little as 50 nanoseconds, plus multiple other tweaks we uncover in this one-hour deep dive into the fastest switch on the market. It is the first member of the second-generation Nexus 3000 family.   (We featured the first-generation Nexus 3000 series in April 2011.)

This is a great show -- it moves fast!

Segment Outline:

  • Robb & Jimmy Ray with Keys to the Show
  • Berna Devrim introduces us to Cisco Algo Boost and the Nexus 3548
  • Will Ochandarena gives us a hardware show and tell
  • Jacob Rapp walks us through a few live simulations
  • Chih-Tsung, ASIC designer, walks us through the custom silicon

 

Further Reading:

- Nexus 3548 Press Release

 


Jacob Rapp:  Benchmarking at Ultra-Low Latency

Gabriel Dixon: The Algo Boost Series

Dave Malik: Cisco Innovation provides Competitive Advantage


Benchmarking at Ultra-Low Latency

Since we started shipping the Nexus 3548 with AlgoBoost to our customers at the beginning of November, there has been more and more interest in testing and verifying the switch's latency under different traffic scenarios. What we have found so far is that while network engineers may be well experienced in testing the throughput capabilities of a switch, verifying its latency can be challenging, especially when that latency is measured in the tens and low hundreds of nanoseconds!
I discussed this topic briefly when doing a hands-on demo for TechWise TV a short time ago.

The goal of this post is to give an overview of the most common latency tests, show how the Nexus 3548 performs in them, and detail some subtleties of low-latency testing for multicast traffic. This post will also address some confusion a few vendors have tried to create around the two-source multicast tests.

Unicast Traffic

The most common test case is to verify throughput and latency when sending unicast traffic. RFC 2544 provides a standard for this test case. The most stressful version of the RFC 2544 test uses 64-byte packets in a full mesh, at 100 percent line rate. Full mesh means that every port sends traffic at the configured rate to every other port.
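To make "full mesh at line rate" concrete, here is a small sketch (the 4-port count and 10 Gb/s rate are illustrative assumptions, not the test setup from the figures): it enumerates the source-to-destination flows of a full mesh, then computes the classic line-rate packet rate for 64-byte frames, counting the 20 bytes of preamble and inter-frame gap each frame also occupies on the wire.

```python
from itertools import permutations

# Full mesh: every port sends to every other port.
ports = range(1, 5)                    # 4 ports, for illustration
flows = list(permutations(ports, 2))   # ordered (src, dst) pairs
print(len(flows), "flows")             # 12 flows on 4 ports

# 100% line rate for 64-byte frames on an assumed 10 Gb/s port:
# each frame occupies 64 + 8 (preamble) + 12 (inter-frame gap) bytes.
line_rate_bps = 10_000_000_000
frame_on_wire_bits = (64 + 8 + 12) * 8
pps = line_rate_bps // frame_on_wire_bits
print(f"{pps:,} packets per second")   # 14,880,952 pps
```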

Figure 1 – Full Mesh traffic pattern

The following graph shows the Nexus 3548 latency results for the Layer 3 RFC 2544 full mesh unicast test, with the Nexus 3548 operating in warp mode.

Figure 2 -- Layer 3 RFC 2544 full mesh unicast test

We can see that the Nexus 3548 consistently forwards packets of all sizes in under 200 nanoseconds at 50% load, and in under 240 nanoseconds at 100% load.

