Below are the results of this new testing effort presented at Hadoop Summit 2012. Thanks to Hortonworks for their collaboration throughout the testing.
Back in March we announced the third generation of UCS, with significant expansions to the platform’s I/O and systems management capabilities as well as a new lineup of servers. This month we’re continuing to expand the UCS server lineup with four new models. The latest batch of M3 systems comprises three Intel Xeon “EN” class machines (E5-2400 series processors) and a four-socket “EP” class (E5-4600 series) blade server: specifically, the UCS B22 and B420 M3 blades and the C22 and C24 M3 rack servers. These new servers round out the UCS portfolio with an even stronger set of products optimized for scale-out and light general-purpose computing, plus a new price/performance 4S category in the mid-range.
If you prefer watching to reading, here is a nice conversation between Intel’s Boyd Davis, VP & GM of the Data Center Infrastructure Group; Cisco’s Jim McHugh, VP of UCS Marketing; and Scott Ciccone, Sr. Product Marketing Manager, highlighting the key benefits of these new models.
To figure out how these fit in, let’s step back and consider the broader evolution of server technology in play here:
1) Cisco has made server I/O more powerful and much simpler.
One of the key differentiators of UCS is the way high-capacity server network access has been aggregated through Cisco Virtual Interface Cards (VICs) and infused with built-in, high-performance virtual networking capabilities. In “pre-UCS” server system architectures, one of the main design considerations was the type and quantity of physical network adapters required. Networking, along with compute (sockets/cores/frequency/cache), system memory, and local disk, has historically been one of the primary resources weighed in the balancing act of cost, physical space, and power consumption, all of which manifest in the various permutations of server designs needed to cover the myriad of workloads most efficiently. Think of these as your four server-subsystem food groups. Architecture purists will remind us that everything outside the processors and their cache falls into the category of “I/O,” but let’s not get pedantic, because that would mess up the food-group analogy.

In UCS, I/O is effectively taken off the table as a design worry, because every server gets its full US RDA of networking through the VIC: generous portions of bandwidth, rich with Fabric Extender technology vitamins, yielding hundreds of Ethernet and FC adapters through one physical device. Gone are the days of hemming and hawing over how many mezzanine card slots your blade has, or how many cards you’ll need to feed that hungry stack of VMs on your rack server. This simplification takes an entire category of complexity out of the server design equation.
Tags: data center, Servers, UCS, UDC, unified computing, unified computing system
Earlier this year I wrote a blog post titled “Feeling the need for speed”, which highlighted the ongoing performance and port speed evolution on the Nexus 7000 platform. These and other performance enhancements on the Nexus 7000 focused on the data plane, essentially making packet switching faster. Over the last four years, we’ve increased the per-slot switching capacity from 80G to 550G with the introduction of new fabric and I/O modules. If you do the math, the Nexus 7000 data plane can now support up to 17.6Tbps; that’s a lot of bits flowing through the switch.
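The 17.6Tbps figure falls out of simple arithmetic. Here is a minimal sketch of that math, assuming an 18-slot Nexus 7018 chassis (16 payload slots after the two supervisor slots) and capacity counted full duplex; neither assumption is stated in the post itself:

```python
# Back-of-the-envelope check of the 17.6 Tbps data plane figure.
# Assumptions (not stated in the post): a Nexus 7018 chassis with
# 16 payload (I/O module) slots, and capacity counted full duplex.
per_slot_gbps = 550   # per-slot switching capacity with the newest fabric modules
payload_slots = 16    # 18 chassis slots minus 2 supervisor slots
full_duplex = 2       # both directions counted

total_tbps = per_slot_gbps * payload_slots * full_duplex / 1000
print(total_tbps)     # 17.6
```

The same per-slot number yields smaller aggregate figures on the smaller chassis (7009/7010), which have fewer payload slots.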
So, to keep up with this dramatic increase in data plane speed, we’re introducing two new supervisors, the Sup2E and the Sup2, to boost Nexus 7000 control plane performance and scale.
In the Nexus 7000, the Supervisor is essentially the control plane. It handles all the control plane and management functions such as Layer 2 and 3 services, redundancy capabilities, configuration management, status monitoring, power and environmental management, and much more.
To handle all these functions and scale to meet the growing demands of data centers, the new supervisors are built with significantly faster CPUs and increased memory. They also offer two key new features: FCoE enablement on the F2 Series modules, and VDC CPU shares, which lets you set CPU priority on a per-VDC basis.
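As a rough illustration of what VDC CPU shares look like in practice, here is a configuration sketch in the style of the NX-OS 6.x `cpu-share` command. The VDC names are hypothetical, and the exact syntax and value range should be verified against the configuration guide for your NX-OS release:

```
! Sketch only -- verify syntax against your NX-OS release's VDC
! configuration guide before use. VDC names are made up.
vdc Production
  cpu-share 8     ! higher share = higher CPU priority under contention
vdc Test
  cpu-share 2     ! lower-priority VDC yields CPU when the box is busy
```

Shares only matter when the control plane CPU is contended; an idle box gives every VDC whatever CPU it asks for.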
The Sup2E is designed for the broadest network deployments and the highest investment protection. With dual quad-core processors and 32GB of memory, it delivers the highest performance and scale. From a pure CPU performance perspective, the Sup2E delivers four times the performance of the current Sup1. This increase enables faster routing and STP convergence times and increased VDC and FEX scale. With the current software release, you can configure up to 8 VDCs plus 1 admin VDC, and connect up to 48 Nexus 2000 Fabric Extenders (10GE versions) per Nexus 7000.
With a quad-core processor and 12GB of memory, the Sup2 is ideal for small and medium-sized deployments. It delivers double the CPU performance of the Sup1 while supporting a similar feature scale, so you get faster control plane performance and added features at the same price point as the Sup1.
Here’s a table that provides a high-level summary of the three Nexus 7000 supervisors.
So, with the introduction of the new supervisors, you’re no longer limited to a one-size-fits-all supervisor selection. You can now choose the right supervisor based on the size of your network deployment and its place in the network.
For more detail on the new Supervisors, I encourage you to check out the Sup 2/2E datasheet posted on the Nexus 7000 page.
Tags: Nexus 7000, Sup2, Sup2E, Supervisor2, Supervisor2E