Introduction

The new NVIDIA A100 Tensor Core GPU brings unprecedented performance to the world's data centers, accelerating data analytics, deep learning training and inference, and HPC. Cisco is bringing NVIDIA A100 PCIe support to its servers for customers seeking performance and scalability for AI workloads.

Overview

AI workloads and their datasets are expanding at an exponential rate, making hardware acceleration an essential part of modern data centers. Enterprises must choose their infrastructure investments carefully to provide the right amount of acceleration for the right workload. With the introduction of the NVIDIA A100 Tensor Core GPU, NVIDIA delivers next-level acceleration and flexibility for AI and data analytics.

Cisco’s ongoing support for NVIDIA GPUs

In addition to Cisco’s current support for NVIDIA T4, V100 and RTX GPUs, Cisco plans to support the new NVIDIA A100 in its Cisco Unified Computing System (UCS) servers and in its hyperconverged infrastructure system, Cisco HyperFlex. These solutions will enable our customers to meet the diverse acceleration needs of data center AI workloads while maximizing infrastructure utilization. The A100 PCIe configuration brings all the capabilities of the A100 GPU in a more power-efficient design, making it well suited for rack servers running applications that use one or two GPUs at a time.

NVIDIA A100 Tensor Core GPU – PCIe form factor

NGC-Ready systems offered by Cisco are built for AI workloads and tested for functionality and performance with GPU-optimized AI software from NVIDIA’s NGC registry, giving system administrators the confidence to deploy these servers to run AI applications. These NGC-Ready systems will soon be offered with NVIDIA A100.

Inferencing

Inferencing often requires less parallel processing than training and usually occurs at the edge, close to the incoming data source. The versatility of the NVIDIA A100 GPU allows Cisco customers to better allocate their accelerated resources among a variety of workloads, maximizing their GPU investments. Enterprises can allocate resources much more efficiently with the A100 GPU’s new Multi-Instance GPU (MIG) technology, which can partition a physical GPU into up to seven isolated GPU instances.
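As an illustration, the sketch below shows how an administrator might partition a single A100 into seven instances using standard nvidia-smi tooling. This is a hypothetical sequence, not a Cisco-specific procedure; it assumes an A100 40GB, admin privileges, and the 1g.5gb profile name, which varies by GPU model.

```python
import subprocess

def run(cmd):
    """Run an nvidia-smi command and print its output."""
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

# Enable MIG mode on GPU 0 (requires admin privileges; a GPU reset may be needed).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# Create seven 1g.5gb GPU instances, each with a default compute instance (-C).
# Profile names and IDs vary by GPU model; 1g.5gb is the smallest A100 40GB profile.
run(["nvidia-smi", "mig", "-i", "0", "-cgi", ",".join(["1g.5gb"] * 7), "-C"])

# List the resulting MIG devices; each appears with its own UUID that
# frameworks can target, for example via CUDA_VISIBLE_DEVICES.
run(["nvidia-smi", "-L"])
```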

Data analytics

Cisco has long been at the forefront of providing IT infrastructure solutions for data lakes and Hadoop workloads. As the amount of data ingested and analyzed continues to grow exponentially and the analytics processes running on these data sets become increasingly complex, the need for GPU acceleration has entered the world of Hadoop and big data analytics. The recent addition of GPU support in Apache Spark 3.0 is a testament to these changes.

Read how the Cisco Data Intelligence Platform and NVIDIA GPUs are accelerating distributed deep learning with Apache Spark 3.0 in the Accelerated Deep Learning with Apache Spark blog.
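To give a flavor of what GPU-aware Spark looks like in practice, here is a minimal PySpark sketch. It assumes Spark 3.0+ with the NVIDIA RAPIDS Accelerator plugin available on the cluster; the discovery script path and dataset path are illustrative values, not Cisco-specific settings.

```python
from pyspark.sql import SparkSession

# Minimal sketch of GPU-aware Spark 3.0 configuration.
# Assumes the RAPIDS Accelerator for Apache Spark jars are on the classpath
# and that each executor host exposes one GPU.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-analytics")
    # RAPIDS Accelerator plugin offloads supported SQL/DataFrame operations to the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    # Spark 3.0 resource-aware scheduling: request one GPU per executor and per task.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "1")
    # Script that reports GPU addresses to Spark (path is illustrative).
    .config("spark.executor.resource.gpu.discoveryScript",
            "/opt/sparkRapidsPlugin/getGpusResources.sh")
    .getOrCreate()
)

# A typical DataFrame aggregation; with the plugin enabled, eligible stages run on the GPU.
df = spark.read.parquet("/data/events")  # illustrative dataset path
df.groupBy("event_type").count().show()
```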

Enterprise grade

Among the most difficult challenges for enterprises today is extracting business value from their AI projects and operationalizing their AI applications. IT departments must deploy infrastructure in support of AI applications while properly allocating resources and balancing the needs of the many workloads in their data centers. The introduction of MIG, along with the virtualization capabilities provided by NVIDIA vComputeServer, will give IT administrators granular control of their GPU resources, ensuring optimal ROI for their Cisco UCS and HyperFlex systems.

In summary

With the addition of the PCIe version of the NVIDIA A100 GPU to the portfolio of supported GPUs, Cisco UCS and HyperFlex will continue to deliver performance, scalability, and versatility for AI workloads at scale. These new offerings will enable our customers to achieve greater degrees of performance with larger data sets and realize insights faster and more economically.

For more information

Read about the NVIDIA A100 Tensor Core GPU

Visit the Cisco AI/ML solutions page for more information

Visit the Cisco and NVIDIA global partnership page to learn more about all our joint solutions.

Connect with me on Twitter @pteel or on LinkedIn



Authors

Paul Teel

Business Development Manager - Citrix

Global Partner Organization