When playing in the high-speed switching game -- timing is everything. Timing ‘sets the pace’ for visibility, establishing the ‘where and when’ for correlation across a broad computing environment, plus compliance and digital forensics with precision time stamps. Every element of the data center requires accurate timing at a level that leaves no room for error.
Speed is the other, more celebrated, if not obvious, requirement of the high-speed switching game. Speed that is measured in increments requiring some new additions to my vocabulary.
When looking at the ways in which we measure speed and regulate time throughout the network, I was of course familiar with NTP, or Network Time Protocol. NTP provides millisecond timing…which, crazy enough…is WAY TOO SLOW for this high-speed market. Now, being from the South, I may blink a little slower than other people, but I read that the average time it takes to blink an eye is 300 to 400 milliseconds! A millisecond is a thousandth of a second. That is considered slow?
Turns out ‘microsecond’ level detail is our next consideration. A microsecond is equal to one millionth (10⁻⁶, or 1/1,000,000) of a second. One microsecond is to one second as one second is to roughly 11.57 days. To keep our blinking example alive: 350,000 microseconds. Still too slow.
Next unit of measure? The Nanosecond. A nanosecond is one billionth of a second. One nanosecond is to one second as one second is to 31.7 years. Time to blink is just silly at this point.
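These ratios are easy to sanity-check with plain arithmetic. A quick Python sketch of the figures above, using the ~350 millisecond blink as the reference:

```python
# Expressing the blink-of-an-eye figures above in each unit.
NS_PER_US = 1_000            # nanoseconds per microsecond
NS_PER_MS = 1_000_000        # nanoseconds per millisecond
NS_PER_S = 1_000_000_000     # nanoseconds per second

blink_ns = 350 * NS_PER_MS   # an average ~350 ms blink, in nanoseconds
print(blink_ns // NS_PER_US)             # 350000 microseconds per blink
print(blink_ns)                          # 350000000 nanoseconds per blink

# "One microsecond is to one second as one second is to ~11.57 days":
print((NS_PER_S // NS_PER_US) / 86_400)  # 1,000,000 seconds / 86,400 s per day

# "One nanosecond is to one second as one second is to ~31.7 years":
print(NS_PER_S / (86_400 * 365.25))      # ~31.7 years
```

Run it yourself if you do not believe how silly the blink comparison gets.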
I used to think higher speeds were attainable simply through more bandwidth. This may be why the idea of ‘low latency’ seems so counter-intuitive. As you hopefully understand at this point, there are limits to how fast data can move, and real gains in this area can only be achieved through gains in efficiency -- in other words, the elimination (as much as possible) of latency.
For Ethernet, speed really is about latency. Ethernet switch latency is defined as the time it takes for a switch to forward a packet from its ingress port to its egress port. The lower the latency, the faster the device can move packets toward their final destination. Also important within this ‘need for speed’ is avoiding packet loss. The magic is in the balancing act: speed and accuracy that challenge our understanding of traditional physics.
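For a sense of scale, compare switch forwarding latency with plain serialization delay -- the time just to clock a frame’s bits onto the wire. A small sketch, assuming a 10 Gb/s port (where 1 Gb/s conveniently equals 1 bit per nanosecond):

```python
# How long it takes just to transmit a frame at a given link rate,
# versus a switch's ingress-to-egress forwarding latency.
def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to put frame_bytes on the wire at link_gbps, in nanoseconds."""
    bits = frame_bytes * 8
    return bits / link_gbps  # 1 Gb/s == 1 bit per nanosecond

# A minimum-size 64-byte Ethernet frame on a 10 Gb/s port:
print(serialization_ns(64, 10.0))    # 51.2 ns just to clock the bits out

# A 1500-byte frame:
print(serialization_ns(1500, 10.0))  # 1200.0 ns -- dwarfing a ~190 ns switch
```

At these rates, the wire itself becomes a meaningful fraction of the total delay, which is why shaving nanoseconds off the switch matters.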
Cisco’s latest entrant to the world of high-speed trading brings us the Nexus 3548: a slim 48-port line-rate switch with latency as low as 190 nanoseconds. It includes a Warp switch port analyzer (SPAN) feature that facilitates the efficient delivery of stock market data to financial trading servers in as little as 50 nanoseconds, plus multiple other tweaks we uncover in this one-hour deep dive into the fastest switch on the market. It is the first member of the second-generation Nexus 3000 family. (We featured the first-generation Nexus 3000 series in April 2011.)
This is a great show -- it moves fast!
- Robb & Jimmy Ray with Keys to the Show
- Berna Devrim introduces us to Cisco Algo Boost and the Nexus 3548
- Will Ochandarena gives us a hardware show and tell
- Jacob Rapp walks us through a few live simulations
- Chih-Tsung, ASIC designer, walks us through the custom silicon
Today, at the High Performance Computing for Wall Street event, we announced Cisco Algorithm Boost, or Algo Boost, technology: a groundbreaking networking innovation, with numerous patents pending, that offers the highest speed, visibility, and monitoring capabilities in the networking industry. A true game changer delivering competitive advantage to our customers!
Ideal for high performance trading, big data and high performance computing environments, this new technology offers network access performance as low as 190 nanoseconds, more than 60% faster than other full featured Ethernet switches. When your business success is determined by nanoseconds, this is a huge gain!
The first switch to integrate the Cisco Algo Boost technology is the new Cisco Nexus 3548 full-featured switch which extends Cisco’s leadership in networking by pairing performance and low latency with innovations in visibility, automation, and time synchronization. And it is tightly integrated with the rich feature set of our Nexus Operating System, a proven operating system used in many of the world’s leading data centers, creating a truly differentiated offering.
For me, even though I am mostly a hardware geek, one of the coolest parts of the Cisco ONE launch at CiscoLive was the introduction of onePK. We see onePK as a core enabling technology with some cool stuff coming down the road.
So, one of the more common questions I get is about the relationship between onePK and other technologies related to network programmability such as OpenFlow (OF). Many folks mistakenly view this as an either/or choice. To be honest, when I first heard about onePK, I thought it was OpenFlow on steroids too; however, I had some fine folks from NOSTG educate me on the difference between the two. They are, in fact, complementary and for many customer scenarios, we expect them to be used in concert. Take a look at the pic below, which shows how these technologies map against the multi-layer model we introduced with Cisco ONE:
As you can see, onePK gives developers comprehensive, granular programmatic access to Cisco infrastructure through a broad set of APIs. On the other hand, protocols such as OpenFlow concern themselves with communications and control amongst the different layers—in OpenFlow’s case, between the control plane and the forwarding plane. Some folks have referred to onePK as a “northbound” interface and protocols such as OpenFlow as “southbound” interfaces. While that might be helpful for understanding the difference between the two technologies, I don’t think it is a strictly accurate description. For one thing, developers can use onePK to directly interact with the hardware. Second, our support for other protocols such as OpenFlow is delivered through agents that are built using onePK.
That last part, about agent support, is actually pretty cool. We can create agents to provide support for whatever new protocols come down the pike by building them upon onePK. This allows flexibility and future-proofing while still maintaining a common underlying infrastructure for consistency and coherency.
For instance, we are delivering our experimental OF support by building it atop the onePK infrastructure. For customers this is a key point: they are not locked into a single approach—they can concurrently use native onePK access, protocol-based access, or traditional access (aka run in hybrid mode) as their needs dictate. Because we are building agents atop onePK, you don’t have to forgo any of the sophistication of the underlying infrastructure. For example, with the forthcoming agent for the ASR9K, we expect to have industry-leading performance because of the level of integration between the OF agents and the underlying hardware made possible by onePK.
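The agent pattern itself can be sketched in a few lines. To be clear, everything below is hypothetical -- toy names only, not the actual onePK API -- it just illustrates the idea of protocol agents sharing one common platform-access layer:

```python
# Toy sketch of "protocol agents built atop a common platform API."
# All class and method names are invented for illustration.

class PlatformAPI:
    """Stands in for the common programmatic layer (the role onePK plays)."""
    def __init__(self):
        self.entries = []  # forwarding entries "programmed" into hardware

    def install_forwarding_entry(self, match: str, action: str) -> None:
        self.entries.append((match, action))

class OpenFlowAgent:
    """A southbound-protocol agent layered on the platform API."""
    def __init__(self, api: PlatformAPI):
        self.api = api

    def handle_flow_mod(self, match: str, action: str) -> None:
        # Translate the protocol message into platform API calls.
        self.api.install_forwarding_entry(match, action)

api = PlatformAPI()
agent = OpenFlowAgent(api)
agent.handle_flow_mod("dst=10.0.0.5", "out_port=3")
print(api.entries)  # the entry the agent programmed via the common layer
```

A future protocol needs only a new agent class; the platform layer underneath stays the same, which is the flexibility and future-proofing point.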
In closing, you can see how extensible our programmatic support is with the ability to use onePK natively or to support technologies and protocols as they are developed and released. This gives customers a remarkable level of flexibility, extensibility and risk mitigation.
We updated our little corner of the Cisco YouTube page with some new playlist categories. Now it’s easy to find my favorite quick hits -- Networking 101.
A couple of great new videos have gone up recently as @jimmyray_purser rolls ‘em out!
Networking 101: What is an ASIC?
Is a custom application-specific integrated circuit (ASIC) really a big deal in networking devices? Jimmy Ray Purser walks us through the difference between custom and full-custom ASIC designs. Watch an overview of the steps involved in ASIC design, learn what makes an ASIC “programmable,” and find out how to tell various ASIC models apart.
Networking 101: Quality of Service
How well do you know QoS? We all throw the term around but are we all truly comfortable with it? Jimmy Ray breaks it down at the packet level and shares the one rule you must never forget.
Networking 101: Switch Latency
Understanding how switch performance is measured can make all the difference in application performance. The terminology of switch latency across various switching methods, and the methodology for obtaining the most accurate latency measurements, make it easy to play games with the numbers. Watch this episode of Networking 101 with Jimmy Ray from TechWiseTV and arm yourself with knowledge.
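One of the games the episode alludes to comes from where you start and stop the clock. A FIFO measurement (first bit in to first bit out) and a LIFO measurement (last bit in to first bit out) differ by the frame’s serialization time, so the same device can quote two very different numbers. A rough sketch, assuming a cut-through device and a 10 Gb/s port; the figures are illustrative, not vendor specs:

```python
# FIFO vs LIFO latency for the same device and frame differ by the
# time it takes to receive the frame itself (its serialization time).
def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    return frame_bytes * 8 / link_gbps  # 1 Gb/s == 1 bit per nanosecond

def fifo_latency_ns(lifo_latency_ns: float,
                    frame_bytes: int, link_gbps: float) -> float:
    # The LIFO figure excludes the time spent clocking the frame in,
    # so the FIFO figure adds it back.
    return lifo_latency_ns + serialization_ns(frame_bytes, link_gbps)

# Same hypothetical switch, 64-byte frames at 10 Gb/s:
lifo = 190.0                             # an illustrative LIFO number, ns
print(fifo_latency_ns(lifo, 64, 10.0))   # ~241.2 ns measured FIFO
```

Same silicon, two headline numbers -- which is exactly why knowing the measurement methodology matters before comparing spec sheets.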