Over the last 30 years the Internet has transformed multiple times. Most of us take it for granted these days. We expect to watch videos on Netflix, run our meetings over WebEx, talk to our friends across the globe on Skype, and have access whether we’re at work, at home, or on the go. But we forget that the Internet wasn’t originally built for this – it’s been barely 20 years since email, the World Wide Web, and always-on network access became realities. The changes have occurred at a dizzying pace.
In the beginning the only way to handle the work of the Internet – routing and forwarding packets – was by using general-purpose computer chips. This didn’t last long, as the explosive growth in network bandwidth drove Cisco and other infrastructure providers to use more customized silicon. Indeed, Cisco’s market success was driven in large part by our ability to offer industry-leading solutions with the best combination of price, performance, and capabilities. This in turn was fueled by Cisco’s use of internally developed network silicon built with advanced ASIC development models, ahead of competitors who continued to rely on general-purpose CPUs or FPGAs to power their products.
At Cisco Live London, Cisco unveiled Wired & Wireless convergence, along with its associated products: the Wireless LAN Controller 5760 and the Catalyst 3850 Switch with built-in Wireless Controller. While on the expo floor explaining the newly introduced ‘converged access’ to our customers, I had some interesting conversations that I thought might be cool to share with you. There may be some paraphrasing here, but if my conversations were turned into a screenplay, it would look like this:
The Cisco Live! London expo show floor is throbbing with excitement as customers browse the many demos around the World of Solutions arena.
NAT, Wireless Controller 5760 Product Manager, stands at a demo booth with the new controller.
CUSTOMER 1 ambles over.
CUSTOMER 1: I heard about converged access and it sounds very interesting. Why should I consider the 5760 controller?
NAT: Do you have bandwidth-hungry applications, such as video/multimedia, used by your wireless users?
When playing in the high-speed switching game, timing is everything. Timing ‘sets the pace’ for visibility, establishing the ‘where and when’; for correlation across a broad computing environment; and for compliance and digital forensics with precision time stamps. Every element of the data center requires accurate timing at a level that leaves no room for error.
Speed is the other, more celebrated, if not obvious, requirement of the high-speed switching game – speed measured in increments that required some new additions to my vocabulary.
When looking at the ways in which we measure speed and regulate time throughout the network, I was of course familiar with NTP, or Network Time Protocol. NTP provides millisecond timing…which, crazily enough…is WAY TOO SLOW for this high-speed market. Now, being from the South, I may blink a little slower than other people, but I read that the average time it takes to blink an eye…is 300 to 400 milliseconds! A millisecond is a thousandth of a second. That is considered slow?
Turns out microsecond-level detail is our next consideration. A microsecond is equal to one millionth (10⁻⁶, or 1/1,000,000) of a second. One microsecond is to one second as one second is to 11.57 days. To keep our blinking example alive: a 350-millisecond blink is 350,000 microseconds. Still too slow.
Next unit of measure? The Nanosecond. A nanosecond is one billionth of a second. One nanosecond is to one second as one second is to 31.7 years. Time to blink is just silly at this point.
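These conversions are easy to sanity-check with a few lines of arithmetic. The sketch below uses the 350-millisecond blink figure from above and, purely for comparison, a 190-nanosecond switch hop (the Nexus 3548 latency cited later in this post):

```python
# Time-scale comparison: how a 350 ms eye blink measures up against
# the units used in high-speed switching.
BLINK_S = 0.350            # average blink, per the figure cited above
SWITCH_LATENCY_NS = 190    # Nexus 3548 port-to-port latency (see below)

blink_ms = BLINK_S * 1e3   # milliseconds
blink_us = BLINK_S * 1e6   # microseconds
blink_ns = BLINK_S * 1e9   # nanoseconds

print(f"One blink = {blink_ms:,.0f} ms = {blink_us:,.0f} us = {blink_ns:,.0f} ns")

# How many 190 ns switch hops fit inside a single blink?
print(f"Switch hops per blink: {blink_ns / SWITCH_LATENCY_NS:,.0f}")
```

In the time it takes to blink, a 190-nanosecond switch could forward a packet well over a million times, which is why millisecond-grade NTP timing is hopeless at this scale.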
I used to think higher speeds were attainable simply through more bandwidth. This may be why the idea of ‘low latency’ seems so counter-intuitive. As you hopefully understand by now, there are limits to how fast data can move, and real gains in this area can only be achieved through gains in efficiency – in other words, the elimination (as much as possible) of latency.
For Ethernet, speed really is about latency. Ethernet switch latency is defined as the time it takes for a switch to forward a packet from its ingress port to its egress port. The lower the latency, the faster the device can move packets toward their final destination. Also important within this ‘need for speed’ is avoiding packet loss. The magic is in the balancing act: speed and accuracy at a scale that strains our everyday intuition.
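To put nanosecond latency budgets in perspective, here is a quick back-of-the-envelope sketch (my own arithmetic, not from the original post) of serialization delay: the time required just to clock a frame’s bits onto the wire, independent of any switching.

```python
# Serialization delay: time to transmit a frame's bits at a given link rate.
# At 10 Gb/s a minimum-size Ethernet frame takes only tens of nanoseconds,
# which is why low-latency switches are measured in nanoseconds too.

def serialization_delay_ns(frame_bytes: int, link_gbps: float) -> float:
    """Nanoseconds to transmit frame_bytes at link_gbps (frame bits only,
    ignoring preamble and inter-frame gap)."""
    bits = frame_bytes * 8
    return bits / link_gbps  # bits / (Gb/s) comes out directly in ns

for size in (64, 1500):
    print(f"{size:>5} B at 10 Gb/s: {serialization_delay_ns(size, 10):.1f} ns")
```

A 64-byte frame serializes in about 51 ns at 10 Gb/s, so a switch adding only a couple hundred nanoseconds of forwarding delay is operating on the same order as the wire itself.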
Cisco’s latest entrant to the world of high-speed trading is the Nexus 3548: a slim 48-port line-rate switch with latency as low as 190 nanoseconds. It includes a Warp switch port analyzer (SPAN) feature that facilitates the efficient delivery of stock market data to financial trading servers in as little as 50 nanoseconds, plus multiple other tweaks we uncover in this one-hour deep dive into the fastest switch on the market. It is the first member of the second-generation Nexus 3000 family. (We featured the first-generation Nexus 3000 series in April 2011.)
This is a great show -- it moves fast!
- Robb & Jimmy Ray with Keys to the Show
- Berna Devrim introduces us to Cisco Algo Boost and the Nexus 3548
- Will Ochandarena gives us a hardware show and tell
- Jacob Rapp walks us through a few live simulations
- Chih-Tsung, ASIC designer, walks us through the custom silicon
Today, at the High Performance Computing for Wall Street event, we announced Cisco Algorithm Boost (Algo Boost) technology, a groundbreaking networking innovation with numerous patents pending that offers the highest speed, visibility, and monitoring capabilities in the networking industry. A true game changer, delivering competitive advantage to our customers!
Ideal for high-performance trading, big data, and high-performance computing environments, this new technology offers network access performance as low as 190 nanoseconds, more than 60% faster than other full-featured Ethernet switches. When your business success is determined by nanoseconds, this is a huge gain!
The first switch to integrate the Cisco Algo Boost technology is the new Cisco Nexus 3548 full-featured switch which extends Cisco’s leadership in networking by pairing performance and low latency with innovations in visibility, automation, and time synchronization. And it is tightly integrated with the rich feature set of our Nexus Operating System, a proven operating system used in many of the world’s leading data centers, creating a truly differentiated offering.
For me, even though I am mostly a hardware geek, one of the coolest parts of the Cisco ONE launch at Cisco Live was the introduction of onePK. We see onePK as a core enabling technology, with some cool stuff coming down the road.
So, one of the more common questions I get is about the relationship between onePK and other technologies related to network programmability such as OpenFlow (OF). Many folks mistakenly view this as an either/or choice. To be honest, when I first heard about onePK, I thought it was OpenFlow on steroids too; however, I had some fine folks from NOSTG educate me on the difference between the two. They are, in fact, complementary and for many customer scenarios, we expect them to be used in concert. Take a look at the pic below, which shows how these technologies map against the multi-layer model we introduced with Cisco ONE:
As you can see, onePK gives developers comprehensive, granular programmatic access to Cisco infrastructure through a broad set of APIs. On the other hand, protocols such as OpenFlow concern themselves with communications and control among the different layers (in OpenFlow’s case, between the control plane and the forwarding plane). Some folks have referred to onePK as a “northbound” interface and protocols such as OpenFlow as “southbound” interfaces. While that framing may help in understanding the difference between the two technologies, I don’t think it is a strictly accurate description. For one thing, developers can use onePK to directly interact with the hardware. Second, our support for other protocols such as OpenFlow is delivered through agents that are built using onePK.
That last part, about the agent support, is actually pretty cool. We can create agents to provide support for whatever new protocols come down the pike by building them upon onePK. This allows flexibility and future-proofing while still maintaining a common underlying infrastructure for consistency and coherency.
For instance, we are delivering our experimental OpenFlow support by building it atop the onePK infrastructure. For customers this is a key point: they are not locked into a single approach. They can concurrently use native onePK access, protocol-based access, or traditional access (aka run in hybrid mode) as their needs dictate. And because we are building agents atop onePK, you don’t have to forgo any of the sophistication of the underlying infrastructure. For example, with the forthcoming agent for the ASR9K, we expect industry-leading performance because of the level of integration between the OpenFlow agents and the underlying hardware made possible by onePK.
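Since the agent model is described only architecturally, a toy sketch may help. Everything below is hypothetical – the class and method names are invented for illustration and are NOT the real onePK SDK – it shows only the idea that protocol agents and native access share one common platform API:

```python
# Hypothetical sketch of the layering described above: a protocol agent
# built on top of a common platform API. All names here are invented
# for illustration; this is NOT the real onePK API.

class DevicePlatformAPI:
    """Stands in for broad programmatic access to the device (onePK's role)."""
    def __init__(self) -> None:
        self.entries = []  # forwarding entries installed on the "device"

    def install_forwarding_entry(self, match: str, action: str) -> None:
        self.entries.append((match, action))


class OpenFlowAgent:
    """A protocol agent layered on the platform API, as the post describes."""
    def __init__(self, platform: DevicePlatformAPI) -> None:
        self.platform = platform

    def handle_flow_mod(self, match: str, action: str) -> None:
        # Translate the protocol message into native platform calls.
        self.platform.install_forwarding_entry(match, action)


platform = DevicePlatformAPI()
agent = OpenFlowAgent(platform)

# Native access and protocol-based access can coexist ("hybrid mode"):
platform.install_forwarding_entry("dst=10.0.0.1", "port 2")  # native access
agent.handle_flow_mod("dst=10.0.0.2", "port 3")              # via the agent

print(platform.entries)
```

The design point this toy captures is that a new protocol only requires a new agent class; the underlying platform interface, and everything built on it, stays the same.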
In closing, you can see how extensible our programmatic support is with the ability to use onePK natively or to support technologies and protocols as they are developed and released. This gives customers a remarkable level of flexibility, extensibility and risk mitigation.