
Cisco UCS X-Series was designed to be modular and expandable, blurring the line between the efficiencies of blade servers (cable reduction, power, and cooling) and the expandability of rack servers (storage and PCIe). It all starts with the Cisco UCS X210c Compute Node.

 

The Brains


 

The Cisco UCS X210c M6 Compute Node is based on 3rd Gen Intel® Xeon® Scalable processors, the only data center CPUs with built-in AI acceleration. With 8 to 40 powerful cores and a wide range of frequency, feature, and power levels, these processors are ready for the most demanding workloads. A full chassis of X210c M6 servers can have 600+ cores.

 

Memory

Support for 32 DDR4 3200 MHz DIMMs (16 per socket) provides a huge memory footprint. With standard memory, the X210c M6 supports up to 8TB. When paired with Intel Optane™ Persistent Memory 200 Series modules, the server supports up to 12TB of memory. In the past, applications like in-memory databases that required large amounts of memory ran on 4-socket servers. Now many of those applications can run on 2-socket servers like the UCS X210c thanks to its larger memory footprint. A full chassis of X210c M6 servers can have 96TB of memory.
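
To ground those numbers, here is a quick back-of-the-envelope check in Python. The eight-node chassis count and the per-module capacities (256GB DDR4 DIMMs, 512GB Optane PMem modules) are assumptions consistent with the figures above, not configuration guidance:

    # Chassis-level math for the core and memory figures quoted above.
    # Assumes an 8-node chassis and top-end, illustrative configurations.
    NODES_PER_CHASSIS = 8

    # CPU: two 3rd Gen Xeon Scalable sockets, up to 40 cores each
    cores_per_node = 2 * 40
    print(f"Cores per chassis: {cores_per_node * NODES_PER_CHASSIS}")  # 640, the "600+" above

    # Memory: 32 x 256GB DDR4 DIMMs, or 16 DDR4 DIMMs + 16 x 512GB PMem modules
    ddr4_tb = 32 * 256 / 1024                # 8TB standard
    pmem_tb = (16 * 256 + 16 * 512) / 1024   # 12TB with Optane PMem (assumed mix)
    print(f"Per node: {ddr4_tb:.0f}TB DDR4, {pmem_tb:.0f}TB with PMem")
    print(f"Per chassis: {pmem_tb * NODES_PER_CHASSIS:.0f}TB")  # 96TB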

Cisco UCS X210c M6 Compute Node

Local Storage

With up to six NVMe/SAS/SATA hot-plug drives at capacities of up to 15.3TB, the X210c can meet many of the storage needs that previously required a 1RU rack server. Additionally, two M.2 boot drives, with optional RAID, can be used for the hypervisor or OS. Soon, 30TB drives will expand that capacity, offering more than 1PB of storage in a single chassis.
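
The same arithmetic backs the 1PB claim; a minimal sketch, assuming eight nodes per chassis and the upcoming 30TB drive capacity:

    # Raw storage per chassis: six drives per node, eight nodes per chassis.
    drives_per_node = 6
    drive_tb = 30    # announced capacity; 15.3TB drives ship today
    nodes = 8

    chassis_tb = drives_per_node * drive_tb * nodes
    print(f"Raw capacity per chassis: {chassis_tb}TB (~{chassis_tb / 1000:.2f}PB)")
    # -> 1440TB, roughly 1.44PB raw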

 

Cisco 14000 Series VICs (Virtual Interface Cards)

More than a NIC, VICs have always been the secret sauce of UCS servers. They provide Unified Fabric connectivity (Ethernet for data, FCoE for storage, and management) for all modular servers and UCS C-Series rack servers.

The UCS VIC 14425 mLOM provides up to 50Gbps of bandwidth per IFM (the Cisco UCS 9108 25G Intelligent Fabric Module, the subject of our next blog!). With two IFMs per chassis, that is 100Gbps of bandwidth per server. If that’s not enough, add the VIC 14825 mezzanine adapter for up to 200Gbps per server. The VIC 14825 will also provide PCIe connectivity to future UCS X-Fabric modules.

What does this mean for your applications? If an application needs maximum bandwidth, a single virtual NIC (vNIC) can be allocated 25Gbps. And you can have more than one! If the application is more balanced and needs access to Fibre Channel storage (think FlexPod or FlashStack), you can create multiple vNICs and vHBAs, allocating that 200Gbps so all traffic gets the resources it needs.
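
As a sketch of how that 200Gbps might be carved up, here is a hypothetical allocation across vNICs and vHBAs. The interface names and shares are illustrative only; real vNIC/vHBA policies are defined in your UCS management tooling (such as Cisco Intersight server profiles), not in Python:

    # Hypothetical split of one node's 200Gbps across virtual interfaces.
    # 50Gbps per IFM per VIC, two IFMs, two VICs (14425 mLOM + 14825 mezz).
    NODE_BANDWIDTH_GBPS = 2 * 50 + 2 * 50

    virtual_interfaces = {
        "vnic-mgmt": 10,      # management traffic
        "vnic-vmotion": 40,   # live-migration traffic
        "vnic-data-a": 50,    # application data, fabric A
        "vnic-data-b": 50,    # application data, fabric B
        "vhba-a": 25,         # Fibre Channel storage, fabric A
        "vhba-b": 25,         # Fibre Channel storage, fabric B
    }

    assert sum(virtual_interfaces.values()) <= NODE_BANDWIDTH_GBPS
    for name, gbps in virtual_interfaces.items():
        print(f"{name:>12}: {gbps} Gbps")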

 

What’s Next

Come back next week as we discuss the Unified Fabric and the IFM. Can’t wait until then? Check out the links in this blog for more information on X-Series.

 


Author

Bill Shields

Senior Marketing Manager

Product and Solutions Marketing Team