The new NVIDIA A100 Tensor Core GPU, the first elastic, multi-instance GPU that unifies data analytics, training, inference and HPC, will allow Cisco customers to better utilize their accelerated resources for AI workloads.
As AI workloads mature, the need for hardware acceleration has grown and become more refined. Enterprises need to be judicious in their infrastructure investments, and their IT departments must ensure they provide the right amount of acceleration to each workload. With the introduction of the NVIDIA A100 Tensor Core GPU, NVIDIA delivers unprecedented acceleration and flexibility for AI and data analytics.
Cisco's continued support for NVIDIA GPUs
In addition to Cisco’s current support for NVIDIA T4 and V100 GPUs, Cisco plans to support the new NVIDIA A100 in its Cisco Unified Computing System (UCS) servers and in its hyperconverged infrastructure system, Cisco HyperFlex. This will enable our customers to meet their different acceleration needs for their data center workloads while maximizing their infrastructure utilization.
Inference usually requires less parallel processing than training and often occurs at the edge, close to the data source. The versatility of the NVIDIA A100 GPU will allow Cisco customers to better allocate their accelerated resources among a variety of workloads, maximizing their GPU investments. With the A100 GPU's new Multi-Instance GPU (MIG) technology, which can partition a physical GPU into up to seven isolated GPU instances, IT managers will be able to allocate resources much more efficiently.
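To illustrate what MIG partitioning looks like in practice, the sketch below uses the standard `nvidia-smi` MIG subcommands to carve an A100 into isolated instances. The GPU index and the specific instance profiles are assumptions for illustration; the actual profiles available depend on the system and should be taken from the profile listing.

```shell
# Enable MIG mode on GPU 0 (requires admin rights; the GPU may need a reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 supports
nvidia-smi mig -lgip

# Create two isolated GPU instances, e.g. one 3g.20gb and one 2g.10gb slice,
# and their compute instances (-C); profile names here are examples
sudo nvidia-smi mig -cgi 3g.20gb,2g.10gb -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each MIG instance has its own memory, cache, and compute resources, which is what allows, for example, several smaller inference jobs to run side by side on one physical A100 without interfering with each other.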
Cisco has long been at the forefront of providing IT infrastructure solutions for data lakes and Hadoop workloads. As the amount of data ingested and analyzed continues to grow exponentially and the analytics processes running on these data sets become increasingly complex, the need for GPU acceleration has entered the world of Hadoop and big data analytics. The recent addition of GPU support in Apache Spark 3.0 is a testament to these changes.
Read how Cisco Data Intelligence Platform and NVIDIA GPUs are accelerating distributed deep learning with Apache Spark 3.0: Accelerated Deep Learning with Apache Spark blog
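To make the Spark 3.0 GPU support mentioned above concrete, here is a minimal sketch of the Spark properties that request GPUs for executors and tasks. The plugin class and property names follow the Spark 3.0 resource-scheduling and RAPIDS Accelerator conventions, but the specific amounts are illustrative assumptions, not a recommended production configuration.

```python
# Sketch of Spark 3.0 GPU-scheduling properties (values are illustrative).
# In practice these are passed to spark-submit via --conf flags or set on
# a SparkSession builder.
spark_gpu_conf = {
    # Load the RAPIDS Accelerator so SQL/DataFrame work can run on the GPU
    "spark.plugins": "com.nvidia.spark.SQLPlugin",
    "spark.rapids.sql.enabled": "true",
    # Ask the cluster manager for one GPU per executor ...
    "spark.executor.resource.gpu.amount": "1",
    # ... and give each task a fractional share, so tasks can share a GPU
    "spark.task.resource.gpu.amount": "0.25",
}

def tasks_per_gpu(conf):
    """How many concurrent tasks share each executor GPU under this config."""
    executor_gpus = float(conf["spark.executor.resource.gpu.amount"])
    task_share = float(conf["spark.task.resource.gpu.amount"])
    return int(executor_gpus / task_share)

print(tasks_per_gpu(spark_gpu_conf))  # → 4
```

The fractional task amount is the design lever here: it lets Spark schedule several tasks onto one GPU, which pairs naturally with MIG partitioning when finer hardware-level isolation is needed.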
One of the toughest challenges for enterprises trying to get business value out of their AI projects is operationalizing their AI applications. One hurdle IT departments face in deploying infrastructure for AI applications is how to properly allocate resources and balance the needs of the many workloads in their data centers. The introduction of MIG, in addition to the virtualization capabilities provided by NVIDIA vComputeServer, will give IT administrators granular control of their GPU resources, ensuring optimal ROI for their Cisco UCS and HyperFlex systems.
With the addition of the NVIDIA A100 to the portfolio of supported GPUs, Cisco UCS and HyperFlex will continue to deliver performance and versatility for running AI at scale. This will enable our customers to run faster analytics on larger data sets and get to insights sooner and more economically.
For more information
Read NVIDIA’s announcement about the NVIDIA A100 Tensor Core GPU
Visit Cisco AI/ML solution page for more information
Visit Cisco and NVIDIA global partnership page to learn more about all our joint solutions.
Connect on Twitter: @FrancoiseBRees