Last week we participated in the annual Hadoop Summit held in San Jose, CA. When we first met with Hortonworks about the Summit many months back, they mentioned that this year's Hadoop Summit would be promoting reference architectures from many companies in the Hadoop ecosystem. This was great to hear: we had previously presented results from a large round of testing on network and compute considerations for Hadoop at Hadoop World 2011 last November, and we were looking to do a second round of testing to take our original findings and develop a set of best practices around them, including failure and connectivity options. This validation also demystifies a key enterprise question: "Can we use the same architecture and components for Hadoop deployments?" Since much of the value of Hadoop is realized once it is integrated into existing enterprise data models, the goal of the testing was not only to define a reference architecture, but also to define a set of best practices so Hadoop can be integrated into current enterprise architectures.
Below are the results of this new testing effort, presented at Hadoop Summit 2012. Thanks to Hortonworks for their collaboration throughout the testing.
Back in March we announced the third generation of UCS, with significant expansions to the I/O and systems management capabilities of the platform as well as a new lineup of servers. This month we’re continuing to expand the UCS server lineup with the addition of four new models. The latest batch of M3 systems comprises three Intel Xeon “EN” class machines (E5-2400 series processors) as well as a four-socket “EP” (E5-2600 series) blade server. Specifically: the UCS B22 and B420 M3 blades and the C22 and C24 M3 rack servers. These new servers round out the UCS portfolio with an even stronger set of products optimized for scale-out and light general-purpose computing, as well as a new price/performance 4S category in the mid-range.
If you prefer watching to reading, here is a nice conversation between Intel’s Boyd Davis, VP & GM, Data Center Infrastructure Group; Cisco’s Jim McHugh, VP, UCS Marketing; and Scott Ciccone, Sr. Product Marketing Manager, highlighting the key benefits of these new models.
To figure out how these fit in, let’s step back and consider the broader evolution of server technology in play here:
1) Cisco has made server I/O more powerful and much simpler.
One of the key differentiators of UCS is the way in which high-capacity server network access has been aggregated through Cisco Virtual Interface Cards (VICs) and infused with built-in, high-performance virtual networking capabilities. In “pre-UCS” server system architectures, one of the main design considerations was the type and quantity of physical network adapters required. Networking, along with computing (sockets/cores/frequency/cache), system memory, and local disk, has historically been one of the primary resources weighed in the balancing act of cost, physical space, and power consumption, all of which are manifested in the various permutations of server designs required to cover the myriad workloads most efficiently. Think of these as your four server subsystem food groups. Architecture purists will remind us that everything outside the processors and their cache falls into the category of “I/O,” but let’s not get pedantic because that will mess up my food group analogy. In UCS, I/O is effectively taken off the table as a design worry because every server gets its full USRDA of networking through the VIC: generous helpings of bandwidth, rich in Fabric Extender technology vitamins, yielding hundreds of Ethernet and FC adapters through one physical device. Gone are the days of hemming and hawing over how many mezz card slots your blade has, or how many cards you’re going to need to feed that hungry stack of VMs on your rack server. This simplification changes things for the better by taking a lot of complication out of the equation.
Earlier this year I wrote a blog titled “Feeling the need for speed”, which highlighted the ongoing performance and port speed evolution on the Nexus 7000 platform. These and other performance enhancements on the Nexus 7000 focused on the data plane, essentially making packet switching faster. Over the last four years, we’ve increased the per-slot switching capacity from 80G to 550G with the introduction of new fabric and I/O modules. If you do the math, the Nexus 7000 data plane can now support up to 17.6 Tbps; that’s a lot of bits flowing through the switch.
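If you want to do that math yourself, here’s a quick back-of-the-envelope sketch. Note the assumptions are mine, not stated in the post: the largest (18-slot) chassis with 16 payload slots, and capacity quoted full duplex (ingress and egress counted separately), which is how the numbers line up.

```python
# Back-of-the-envelope check of the 17.6 Tbps chassis capacity figure.
# Assumptions (mine): an 18-slot chassis with 16 payload (I/O module)
# slots, and capacity counted full duplex.
per_slot_gbps = 550      # per-slot switching capacity with the newest fabric modules
payload_slots = 16       # 18 slots minus 2 supervisor slots
duplex_factor = 2        # ingress + egress counted separately

total_tbps = per_slot_gbps * payload_slots * duplex_factor / 1000
print(total_tbps)        # 17.6
```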
So, to keep up with this dramatic increase in the data plane speed, we’re introducing two new supervisors, the Sup2E and the Sup2 to boost the Nexus 7000 control plane performance and scale.
In the Nexus 7000, the Supervisor is essentially the control plane. It handles all the control plane and management functions such as Layer 2 and 3 services, redundancy capabilities, configuration management, status monitoring, power and environmental management, and much more.
To handle all these functions and scale to meet the growing demands of data centers, the new supervisors are built with significantly faster CPUs and increased memory. They also offer two key new features: FCoE enablement on the F2 Series modules, and VDC CPU shares, which lets you set CPU priority on a per-VDC basis.
The Sup2E is designed for the broadest network deployments and the highest investment protection. With dual quad-core processors and 32GB of memory, it delivers the highest performance and scale. From a pure CPU performance perspective, the Sup2E delivers 4 times the performance of the current Sup1. This increase enables faster routing and STP convergence times and increased VDC and FEX scale. With the current software release, you can configure up to 8 VDCs plus 1 admin VDC, and connect up to 48 Nexus 2000 Switches (10GE version) per Nexus 7000.
With a quad-core processor and 12GB of memory, the Sup2 is ideal for small and medium-sized deployments. While it delivers similar feature scale to the Sup1, it doubles the CPU performance, offering faster control plane performance and added features at the same price point.
Here’s a table that provides a high-level summary of the three Nexus 7000 supervisors.
So, with the introduction of the new supervisors, you’re no longer limited to a one-size-fits-all supervisor selection. You can now choose the right supervisor based on the size of your network deployment and its place in the network.
For more detail on the new supervisors, I encourage you to check out the Sup2/2E datasheet posted on the Nexus 7000 page.
Where were you in 1998? Somewhere at one of our customer sites, someone booted one of our 3640 routers, and it’s been running ever since without a reboot!
It’s been running since last century! Wow. It’s been running since around the time my daughter was born, and a good few years before my son was born! It’s been running longer than some of our competitors have been in existence, and longer than Juniper Networks has been a publicly traded company!
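Out of curiosity, the rough uptime arithmetic looks like this. A sketch only: the exact boot date isn’t given, so I’m assuming a boot sometime in mid-1998 and this post being written in mid-2012.

```python
from datetime import date

# Rough uptime arithmetic. Assumption: the router booted around mid-1998
# and this post dates to mid-2012; the specific dates are illustrative.
boot = date(1998, 6, 1)
today = date(2012, 6, 1)

uptime_days = (today - boot).days
uptime_years = uptime_days / 365.25
print(uptime_days, round(uptime_years, 1))   # 5114 14.0 -- roughly 14 years of uptime
```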
I learned this from an email that was passed around my office highlighting this remarkable evidence of reliability. It made me wonder: in your data center, what is your longest-running piece of Cisco data center equipment?
And it also reminded me of some of our best practices for network reliability, such as Cisco Smart Services, described in this short VoD:
So now for the evidence. As you can see from the “show version” Cisco IOS output below…
As the Product Manager for Fibre Channel over Ethernet (FCoE), I often get asked some of the hard questions about how the technology works. Sometimes I get asked the easy questions. Sometimes – like two nights ago – I get asked if the standards for FCoE are done.
I’m not kidding.
My own expectations for discussing FCoE were focused around the topics and conversations that we’ve been seeing over the last year, since the last Cisco Live in 2011.