Moving to the Cloud? Intelligent Buffering Matters!
Craig Huitema and Soni Jiandani blogged about Cisco's latest ASIC innovations for the Nexus 9K platforms, and IDC did a write-up and video. In this blog, I'll expand on one component of those innovations: intelligent buffering. First, let's look at how switching ASICs may be designed today. Most switching ASICs are built with on-chip buffer memory, off-chip buffer memory, or both. On-chip buffer size differs from one ASIC type to another and is naturally limited by die size and cost. Some designs therefore add off-chip buffer to complement the on-chip buffer, but this may not be the most efficient way to design and architect an ASIC/switch.
This leads us to another critical point: how can the switch ASIC handle TCP congestion control, and how does buffering affect long-lived TCP flows versus incast/microburst traffic? Incast is a sudden spike in the amount of data going into the buffer because many sources send data to a particular output simultaneously. Examples include IP-based storage, where an object may be spread across multiple nodes, and search queries, where a single request may fan out to hundreds or thousands of nodes. In both scenarios, TCP congestion control cannot help because the burst happens so quickly.
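To make the incast arithmetic concrete, here is a minimal back-of-the-envelope sketch (not Cisco code; the node count, burst size, and buffer size are hypothetical) of how simultaneous bursts can exceed a fixed output buffer before TCP congestion control has a chance to react:

```python
# Hypothetical scenario: 100 storage/search nodes each answer a request with
# a 64 KiB burst aimed at one output port that has a 1 MiB buffer.

def incast_drops(num_senders: int, burst_bytes: int, buffer_bytes: int) -> int:
    """Bytes lost when simultaneous bursts exceed the output buffer.

    The bursts arrive far faster than the port can drain, so anything
    beyond the buffer capacity is dropped before the senders' TCP
    congestion control can even detect the congestion.
    """
    offered = num_senders * burst_bytes
    return max(0, offered - buffer_bytes)

dropped = incast_drops(num_senders=100, burst_bytes=64 * 1024,
                       buffer_bytes=1024 * 1024)
print(dropped)  # over 5 MiB of the 6.4 MiB offered is lost in this sketch
```

The point of the sketch: the loss is decided within a single burst interval, which is why per-flow intelligence at the buffer, rather than end-to-end congestion control, is what protects this traffic.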
In this video, Tom Edsall summarizes this phenomenon and the challenges behind it.
Now, what have we done in our latest switch ASIC innovations to tackle these challenges? First of all, the latest ASICs are built on 16nm fabrication technology, the industry's first for switch ASICs, compared with the 28nm offerings from merchant vendors. This allowed us to add capability and scale while keeping cost under control and lowering power consumption.
The other innovations address network congestion: how do we support the different types of traffic that traverse the fabric (distributed IP-based storage, microbursts, big data, and so on) without impacting performance?
Here, Tom Edsall illustrates two mechanisms that alleviate this challenge and shows the importance of intelligent buffering:
- Dynamic Packet Prioritization (DPP) – prioritizes small flows over large flows in transmit scheduling, so that mice flows are transmitted without packet loss from buffer exhaustion or added latency from excessive queuing.
- Approximate Fair Drop (AFD) – introduces flow-size awareness and fairness to the early-drop congestion avoidance mechanism. Unlike WRED, which treats all traffic flows within a given class equally, AFD differentiates large flows from small flows in a class (using an elephant trap), subjecting large (elephant) flows to the early-drop buffer threshold while leaving enough buffer headroom for small (mice) flows.
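To illustrate the AFD idea, here is a simplified admission sketch (not the actual ASIC logic; the elephant threshold, early-drop point, and buffer depth are all hypothetical): flows that a simplified "elephant trap" has seen exceed a byte threshold become subject to probabilistic early drop once the queue passes a fill threshold, while mice flows keep the remaining headroom up to the full buffer.

```python
import random

ELEPHANT_BYTES = 100_000  # flow size above which a flow counts as an elephant
EARLY_DROP_AT = 0.7       # queue fill fraction where elephants start dropping
BUFFER_PKTS = 1000        # queue capacity in packets

def admit(flow_bytes: int, queue_len: int, rng: random.Random) -> bool:
    """Decide whether to enqueue a packet, AFD-style (illustrative only)."""
    fill = queue_len / BUFFER_PKTS
    if queue_len >= BUFFER_PKTS:
        return False  # buffer exhausted: tail drop for everyone
    if flow_bytes > ELEPHANT_BYTES and fill > EARLY_DROP_AT:
        # Drop probability ramps from 0 to 1 as the queue fills past the
        # threshold, reserving the remaining headroom for mice flows.
        drop_p = (fill - EARLY_DROP_AT) / (1 - EARLY_DROP_AT)
        return rng.random() >= drop_p
    return True  # mice flows (and uncongested elephants) use the buffer freely
```

In this sketch, at 85% fill an elephant packet is dropped with roughly 50% probability, while a mice packet is still admitted all the way to a full buffer, which is the flow-size fairness that plain WRED lacks.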
In addition, this Miercom test report (Cisco Systems Speeding Applications in Data Center Networks) puts the traditional simple buffer implementation and our new algorithm-based intelligent buffer architecture to the test with real-world traffic workloads, and shows that intelligent buffer management based on DPP and AFD provides a better solution than simply increasing buffer size.
Another Miercom test report on Big Data (Cisco Network Switch Impact on "Big Data" Hadoop-Cluster Data Processing) compares Hadoop-cluster performance across switches with differing characteristics.
What's in it for customers? For one, they gain a two-year advantage in building a differentiated infrastructure that advances their business goals and outcomes. They get best-in-class infrastructure that supports a wide range of application and traffic types, and app developers and DevOps teams can deliver application performance at cloud scale with a first-rate user experience: an infrastructure for the long run.