
There has been a wealth of industry research, and many volumes written, about data visibility and analytics since the advent of the Big Data era. However, these discussions rarely delve into visibility and analytics requirements for storage and storage networking technologies. This is somewhat ironic, because the term “Big Data” itself was coined to express the challenges and opportunities associated with exponential growth in data storage.

In this blog, we’ll attempt to make good on this oversight with a look back, and forward, at the importance of visibility and analytics for optimizing storage networking. This importance cannot be overstated, as storage networking technologies have evolved dramatically in scale and performance and as new technologies such as NVMe over fabrics have emerged. Today we operate in realms of millions of input/output operations per second (IOPS) and response times measured in microseconds, faster than the blink of an eye. As such, the need for deep visibility and advanced analytics has never been greater to overcome the risks of the “unknown”.

In this video, Paresh Gupta walks us through the past, present, and future of storage traffic visibility and analytics.

https://youtu.be/Is66zQn1GYM

First, a look back at what was possible before. The very first offering for storage traffic visibility was based on a reactive approach using inline traffic analyzers. These date back to the launch of 1-Gbps Fibre Channel and were designed to capture the optical signal on one port and regenerate the same signal on another port. Users could see all the Fibre Channel primitives at the physical layer of the network, as well as the control and end-to-end data frames flowing between the initiator and the target.

When a problem was suspected, the storage admin could introduce an inline traffic analyzer between two neighboring devices for further investigation. This could involve downtime of that network segment for re-cabling; once in place, the analyzer regenerated the signal so that the two neighboring devices were unaware of its presence. This admittedly primitive architecture worked well for more than two decades for attacking hard problems that can only be solved by inspecting every single bit on the wire. Its limitations included network-segment downtime, high cost, poor scaling, and an overall reactive approach.


Another offering was based on replicating Fibre Channel frames on switch ports. It dates back to 2003, when Cisco launched the MDS 9000 switches, and was named Switched Port Analyzer, or SPAN. SPAN takes the frames from a switch port and replicates them to another port without any performance impact on the normal switching path. Cisco also made available special port adapters that could be connected to Cisco MDS Fibre Channel switches to receive SPAN traffic. This external port adapter encapsulated the Fibre Channel frames into Ethernet framing to be consumed by a laptop or workstation via a NIC. The famous Wireshark, or Ethereal as it was known at the time, could then be used to get deep visibility into control and data frames. This approach did not involve downtime of the network segment. However, it was still reactive, and it involved re-cabling and dedicating switch ports to send out traffic.

Now, let’s look at what is possible today. Customers who require deep visibility into Fibre Channel traffic generally rely on external hardware taps, analyzers, and appliances for visibility and analytics. Taps are connected inline in the data path by re-cabling the existing deployment; they reflect a portion of the light to a dedicated analyzer while the rest of the signal passes through untouched.

Analyzers receive a complete copy of the bits flowing on the wire and extract information from the control and data frames as well as the Fibre Channel primitives. This solution offers end-to-end deep visibility and analytics for storage traffic and provides storage admins key performance metrics: how long a read or write command takes to complete, how long a storage array takes to respond to a read or write request, how many I/O operations complete per second, and error reporting and correlation. Taps and analyzers are good at providing deep visibility, but at the expense of re-cabling overhead and the cost and operational burden of managing external devices.
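To make these metrics concrete, here is a minimal sketch of how such figures could be derived from captured traffic. The record fields and names below are purely illustrative assumptions, not the data model of any actual analyzer product:

```python
from dataclasses import dataclass

@dataclass
class IORecord:
    """One completed SCSI exchange, as a hypothetical analyzer might log it.
    Timestamps are in microseconds; all field names are invented for illustration."""
    op: str               # "read" or "write"
    cmd_ts: float         # command frame observed on the wire
    first_data_ts: float  # first data frame returned by the array
    status_ts: float      # status frame that closes the exchange

def summarize(records, window_s=1.0):
    """Derive the kinds of metrics described above from raw exchange records."""
    ect = [r.status_ts - r.cmd_ts for r in records]      # exchange completion time
    dal = [r.first_data_ts - r.cmd_ts for r in records]  # time for array to start responding
    return {
        "iops": len(records) / window_s,      # completed I/O per second
        "avg_ect_us": sum(ect) / len(ect),    # avg command completion latency
        "avg_dal_us": sum(dal) / len(dal),    # avg data access latency
    }

recs = [
    IORecord("read", 0.0, 150.0, 400.0),
    IORecord("read", 10.0, 210.0, 500.0),
]
print(summarize(recs))  # → {'iops': 2.0, 'avg_ect_us': 445.0, 'avg_dal_us': 175.0}
```

The point of the sketch is that every headline metric is just arithmetic over per-exchange timestamps; the hard part, which the external devices handle, is capturing and pairing those frames at wire speed.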


The next approach was visibility integrated into the SAN switches themselves, all the way down to the SCSI level. This allows storage admins to inspect an end-to-end flow between an initiator, a target, and a LUN. However, this approach is limited in scale because the solution is enabled by an external device, which receives data traffic from existing switch ports via an approach similar to SPAN. It also still involves re-cabling to SPAN the traffic to an external device, which can result in high cost and other overhead.

Now, let’s look at what will soon be possible. Cisco recently introduced an architecture that eliminates the need for any external device through capabilities integrated by design into the SAN switches. The new analytics offerings take advantage of recent advancements Cisco has made at the hardware level, enabling the port ASICs to tap traffic at line rate at 32 Gbps and higher speeds without any performance or latency impact on the switched frames. These new switches also leverage on-board Network Processing Units (NPUs), featuring dedicated multi-core, multi-GHz compute power, to extract intelligent metrics from the Fibre Channel and SCSI headers.

This solution will arm storage admins with information spread across multiple frames by correlating it on the switches using the on-board NPUs. The solution is designed to be always on, inspecting every frame of every flow at any speed. Admins can deploy it with a single click and seamlessly scale it to every end device connected to their Fibre Channel fabrics. Finally, fabric-wide correlation and visualization will be provided by existing software and third-party tools.
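The multi-frame correlation idea can be sketched in a few lines. This is not Cisco’s NPU pipeline or the real Fibre Channel header layout; it is a simplified illustration, assuming each frame carries an exchange identifier, a frame type, and a timestamp, of how frames of one SCSI exchange are matched up to yield a single latency metric:

```python
def correlate(frames):
    """Match command and status frames by exchange ID; yield (exchange_id,
    completion_time) once the closing status frame of an exchange arrives.
    Frame tuples (ex_id, frame_type, timestamp_us) are illustrative only."""
    open_exchanges = {}  # exchange id -> timestamp of its command frame
    for ex_id, ftype, ts in frames:
        if ftype == "CMD":
            open_exchanges[ex_id] = ts
        elif ftype == "STATUS" and ex_id in open_exchanges:
            yield ex_id, ts - open_exchanges.pop(ex_id)

frames = [
    (0x1A, "CMD", 100.0),
    (0x2B, "CMD", 110.0),     # a second exchange interleaves with the first
    (0x1A, "DATA", 180.0),
    (0x1A, "STATUS", 300.0),  # exchange 0x1A completes in 200.0 us
    (0x2B, "STATUS", 360.0),  # exchange 0x2B completes in 250.0 us
]
print(dict(correlate(frames)))
```

Keeping this per-exchange state next to the port, rather than shipping every frame to an external appliance, is what lets an always-on design inspect every flow without a copy of the traffic ever leaving the switch.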


Summary: As technologies such as flash arrays and NVMe over fabrics become even more prevalent, deep visibility and analytics will play an increasingly important role in meeting strict service-level agreements. Deep visibility will help maintain peak performance, while analytics will enable proactive and predictive operations. By integrating deep visibility and analytics into the MDS 9000 switches, Cisco is shaping the future of the storage networking industry for years to come.

For more info: Cisco 32G Announcement

Tony Antony
Sr Marketing Manager

 


