
Today, we’re exploring how Ethernet compares with InfiniBand in AI/ML environments, focusing on how Cisco Silicon One™ manages network congestion and enhances performance for AI/ML workloads. This post emphasizes the importance of benchmarking and key performance indicator (KPI) metrics in evaluating network solutions, showcasing the Cisco Zeus Cluster equipped with 128 NVIDIA® H100 graphics processing units (GPUs) and cutting-edge congestion management technologies like dynamic load balancing (DLB) and packet spray.

Networking standards to meet the needs of AI/ML workloads

AI/ML training workloads generate repetitive micro-congestion that puts significant stress on network buffers. The east-west, GPU-to-GPU traffic during model training demands a low-latency, lossless network fabric. InfiniBand has been a dominant technology in high-performance computing (HPC) environments and, more recently, in AI/ML environments.

Ethernet is a mature alternative with advanced features that can address the rigorous demands of AI/ML training workloads, and Cisco Silicon One can effectively execute load balancing and manage congestion. We set out to benchmark Cisco Silicon One alongside NVIDIA Spectrum-X™ and InfiniBand.

Evaluation of network fabric solutions for AI/ML

Network traffic patterns vary based on model size, architecture, and parallelization techniques used in accelerated training. To evaluate AI/ML network fabric solutions, we identified relevant benchmarks and KPI metrics for AI/ML workload and infrastructure teams because they view performance through different lenses.
We established comprehensive tests to measure performance and generate metrics specific to AI/ML workload and infrastructure teams. For these tests, we used the Cisco Zeus Cluster, featuring dedicated back-end and storage networks, a standard three-stage leaf-spine Clos fabric built with Cisco Silicon One-based platforms, and 128 NVIDIA H100 GPUs (see Figure 1).

Figure 1. Cisco Zeus Cluster topology

We developed benchmarking suites using open-source and industry-standard tools contributed by NVIDIA and others. Our benchmarking suites included the following (see also Table 1):

  • Remote direct memory access (RDMA) benchmarks—built using IBPerf utilities—to evaluate network performance during congestion created by incast
  • NVIDIA Collective Communication Library (NCCL) benchmarks, which evaluate application throughput during the training and inference communication phases among GPUs (a minimal launch sketch follows Table 1)
  • MLCommons MLPerf benchmark suite, which evaluates the metrics best understood by workload teams: job completion time (JCT) and tokens per second
Table 1. Benchmarking key performance indicator (KPI) metrics

Legend:

JCT = Job completion time
BusBW = Bus bandwidth
ECN/PFC = Explicit congestion notification and priority flow control
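
For context on how such NCCL runs are typically launched, the snippet below is a minimal sketch of an All-Reduce sweep driven from Python; the MPI host file, rank count, and nccl-tests binary path are illustrative assumptions, not the exact Zeus Cluster configuration.

```python
# Minimal sketch: launch an nccl-tests All-Reduce sweep with MPI.
# The host file, rank count, and binary path are illustrative assumptions.
import subprocess

NUM_RANKS = 128                          # one MPI rank per GPU (assumed)
HOSTFILE = "/etc/mpi/hostfile"           # hypothetical host file listing GPU nodes
NCCL_TEST_BIN = "/opt/nccl-tests/build/all_reduce_perf"  # hypothetical build path

cmd = [
    "mpirun", "-np", str(NUM_RANKS), "--hostfile", HOSTFILE,
    NCCL_TEST_BIN,
    "-b", "1M",   # starting message size
    "-e", "8G",   # ending message size
    "-f", "2",    # double the message size at each step
    "-g", "1",    # one GPU per rank
]

# nccl-tests prints per-size latency, algBW, and busBW lines to stdout.
subprocess.run(cmd, check=True)
```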

NCCL benchmarking and congestion avoidance features

Congestion builds up during the backpropagation stage of the training process, when a gradient sync is required among the GPUs participating in training. As the model size increases, so do the gradient size and the number of GPUs, creating massive micro-congestion in the network fabric. Figure 2 shows JCT results and traffic distribution benchmarking. Note how Cisco Silicon One supports a set of advanced features for congestion avoidance, such as DLB and packet spray techniques, as well as data center quantized congestion notification (DCQCN) for congestion management.
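
To give a rough sense of the traffic a single gradient sync generates, here is a back-of-the-envelope sketch; the model size, gradient precision, and ring All-Reduce traffic factor are illustrative assumptions, not measurements from the Zeus Cluster.

```python
# Back-of-the-envelope sketch: per-GPU traffic for one gradient sync using a
# ring All-Reduce. Model size, precision, and GPU count are illustrative.
PARAMS = 7e9            # hypothetical 7B-parameter model
BYTES_PER_GRAD = 2      # FP16/BF16 gradients
NUM_GPUS = 128

gradient_bytes = PARAMS * BYTES_PER_GRAD
# A ring All-Reduce moves roughly 2 * (n - 1) / n times the data per GPU.
per_gpu_traffic = gradient_bytes * 2 * (NUM_GPUS - 1) / NUM_GPUS

print(f"Gradient size per sync: {gradient_bytes / 1e9:.1f} GB")
print(f"Per-GPU All-Reduce traffic per sync: {per_gpu_traffic / 1e9:.1f} GB")
```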

Figure 2. NCCL benchmark–JCT and traffic distribution

Figure 2 illustrates how the NCCL benchmarks stack up with different congestion avoidance features. We tested the most common collectives across a range of message sizes to highlight these metrics. The results show that JCT improves with DLB and packet spray for All-to-All, which causes the most congestion due to its communication pattern. Although JCT is the most understood metric from an application perspective, it doesn't show how effectively the network is utilized, which is something the infrastructure team needs to know. This knowledge could help them:

  • Improve network utilization to get better JCT
  • Understand how many workloads can share the network fabric without adversely impacting JCT
  • Plan for capacity as use cases increase

To gauge network fabric utilization, we calculated Jain's Fairness Index, where LinkTxᵢ is the amount of transmitted traffic on fabric link i and N is the number of fabric links:

Jain's Fairness Index = (Σᵢ LinkTxᵢ)² / (N × Σᵢ LinkTxᵢ²)

The index value ranges from 0.0 to 1.0, with higher values being better; a value of 1.0 represents a perfectly even distribution. The fabric-link traffic distribution chart in Figure 2 shows how the DLB and packet spray algorithms achieve a near-perfect Jain's Fairness Index. Equal-cost multi-path (ECMP) routing uses static hashing and, depending on flow entropy, can lead to traffic polarization, causing micro-congestion and negatively affecting JCT.
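
To make the fairness comparison concrete, the sketch below computes Jain's Fairness Index for two illustrative link-load patterns, one mimicking ECMP polarization and one mimicking packet spray; the traffic values are made up for illustration, not measured Zeus Cluster data.

```python
# Minimal sketch: Jain's Fairness Index over per-link transmitted traffic.
# The two traffic distributions below are illustrative, not measured data.

def jains_fairness_index(link_tx):
    """J = (sum of x_i)^2 / (N * sum of x_i^2); 1.0 means a perfectly even load."""
    n = len(link_tx)
    total = sum(link_tx)
    return total * total / (n * sum(x * x for x in link_tx))

# Hypothetical transmitted traffic (GB) on eight fabric links.
ecmp_polarized = [90, 85, 10, 12, 95, 8, 88, 12]   # static hashing piles flows onto a few links
packet_spray   = [50, 49, 51, 50, 50, 49, 51, 50]  # per-packet spraying evens out the load

print(f"ECMP fairness index:         {jains_fairness_index(ecmp_polarized):.3f}")
print(f"Packet spray fairness index: {jains_fairness_index(packet_spray):.3f}")
```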

Silicon One, NVIDIA Spectrum, and InfiniBand

The NCCL benchmark competitive analysis (Figure 3) shows how Cisco Silicon One performs alongside NVIDIA Spectrum-X Ethernet and InfiniBand technologies. The NVIDIA data was taken from the SemiAnalysis publication. Note that Cisco does not know how those tests were performed, but we do know that the cluster size and GPU-to-network-fabric connectivity are similar to the Cisco Zeus Cluster.

Figure 3. NCCL benchmark–competitive analysis

Bus bandwidth (busBW) benchmarks the performance of collective communication by measuring the speed of operations involving multiple GPUs. Each collective has a specific formula for converting measured throughput into busBW, and the result is reported during benchmarking. Figure 3 shows that Cisco Silicon One–All Reduce performs comparably to NVIDIA Spectrum-X and InfiniBand across various message sizes.
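
As an illustration of how busBW is derived, the sketch below applies the All-Reduce correction factor documented by nccl-tests, busBW = algBW × 2(n−1)/n; the message size, elapsed time, and GPU count are hypothetical values chosen for the example.

```python
# Minimal sketch: algorithm bandwidth (algBW) vs. bus bandwidth (busBW) for
# All-Reduce, using the correction factor documented by nccl-tests.
# The message size, elapsed time, and GPU count below are hypothetical.

def all_reduce_bus_bw(message_bytes, elapsed_s, num_gpus):
    alg_bw = message_bytes / elapsed_s                  # bytes/s handled per rank
    bus_bw = alg_bw * 2 * (num_gpus - 1) / num_gpus     # All-Reduce factor: 2(n-1)/n
    return alg_bw / 1e9, bus_bw / 1e9                   # convert to GB/s

alg, bus = all_reduce_bus_bw(message_bytes=8 * 2**30, elapsed_s=0.4, num_gpus=128)
print(f"algBW: {alg:.1f} GB/s, busBW: {bus:.1f} GB/s")
```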

Network fabric performance assessment

The IBPerf benchmark compares RDMA performance under ECMP, DLB, and packet spray, which is crucial for assessing network fabric performance. Incast scenarios, where multiple GPUs send data to one GPU, often cause congestion. We simulated these conditions using IBPerf tools.
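
Below is a minimal sketch of how such an incast can be generated with the perftest ib_write_bw utility driven from Python; the host names, RDMA device name, ports, and test duration are illustrative assumptions rather than the exact commands used in our runs.

```python
# Minimal sketch: generate an N-to-1 RDMA incast with perftest's ib_write_bw.
# Host names, device name, ports, and duration are illustrative assumptions.
import subprocess
import time

RECEIVER = "gpu-node-00"        # hypothetical incast target
SENDERS = ["gpu-node-01", "gpu-node-02", "gpu-node-03", "gpu-node-04"]
DEVICE = "mlx5_0"               # hypothetical RDMA device name
BASE_PORT = 18515

# One ib_write_bw server per sender on the receiver, each on its own TCP port.
servers = [subprocess.Popen(
    ["ssh", RECEIVER, "ib_write_bw", "-d", DEVICE, "-p", str(BASE_PORT + i),
     "--report_gbits"]) for i in range(len(SENDERS))]
time.sleep(2)  # give the servers a moment to start listening

# All senders write to the receiver at once for 30 seconds, creating the incast.
clients = [subprocess.Popen(
    ["ssh", sender, "ib_write_bw", RECEIVER, "-d", DEVICE, "-p", str(BASE_PORT + i),
     "-D", "30", "--report_gbits"]) for i, sender in enumerate(SENDERS)]

for p in clients + servers:
    p.wait()
```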

Figure 4. IBPerf benchmark–RDMA performance

Figure 4 shows how aggregated session throughput and JCT respond to the different congestion avoidance algorithms: ECMP, DLB, and packet spray. DLB and packet spray achieve full link bandwidth, improving JCT. The figure also illustrates how DCQCN handles micro-congestion, with priority flow control (PFC) and explicit congestion notification (ECN) ratios improving with DLB and dropping significantly with packet spray. Although JCT improves only slightly from DLB to packet spray, the ECN ratio drops dramatically due to packet spray's ideal traffic distribution.

Training and inference benchmark

The MLPerf training and inference benchmarks, published by the MLCommons organization, aim to enable fair comparison of AI/ML systems and solutions.

Figure 5. MLPerf benchmark–training and inference

We focused on AI/ML data center solutions by executing training and inference benchmarks. To achieve optimal results, we extensively tuned the compute, storage, and networking components using the congestion management features of Cisco Silicon One. Figure 5 shows comparable performance across platform vendors, with Cisco Silicon One performing on par with the other vendors' Ethernet solutions.

Conclusion

Our deep dive into Ethernet and InfiniBand within AI/ML environments highlights the remarkable prowess of Cisco Silicon One in tackling congestion and boosting performance.

Looking at this data, it’s clear that the options reviewed can provide excellent results for our customers. Cisco and NVIDIA will continue to work together to evolve solutions for our customers within an overall common Spectrum-X architecture.

Many thanks to Vijay Tapaskar, Will Eatherton, and Kevin Wollenweber for their support in this benchmarking process.

Explore secure AI infrastructure

Discover the secure, scalable, and high-performance AI infrastructure you need to develop, deploy, and manage AI workloads securely when you choose Cisco Secure AI Factory with NVIDIA.


Explore Cisco Secure AI Factory with NVIDIA



Authors

Rakesh Kumar

Senior Director of Engineering

AI/ML Infrastructure Solutions and Benchmarking