This week we’re announcing new systems at the upper end of the UCS server product line: some heavy-duty iron for heavy-duty times. These are important new tools for our UCS customers: the digital age is accelerating, IT needs more horsepower to keep up, and there is a lot at stake.
Consider this: less than 10 years ago, some of the largest mainframes scaled up to half a terabyte (TB) of main memory. What if I were to tell you that these latest-generation UCS blade servers will scale to 3TB? Sound like a lot? It is. And that’s just the two-processor version. Connect two UCS B260 M4 blades with an expansion connector and they become a UCS B460 M4, a four-socket server that will scale to 6TB. Putting that into perspective: a spiffy new laptop might ship today with 8GB of memory. Multiply that by 750 and you have 6TB.
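The arithmetic behind that comparison is easy to check for yourself (a quick back-of-the-envelope sketch; the 750 figure implies the decimal convention of 1TB = 1000GB):

```python
# Back-of-the-envelope check of the memory figures above.
# All numbers come from the post; it uses decimal units (1 TB = 1000 GB),
# since 6 TB / 8 GB = 750.

GB_PER_TB = 1000

b260_memory_tb = 3                    # two-socket UCS B260 M4
b460_memory_tb = 2 * b260_memory_tb   # two B260s joined into a four-socket B460 M4
laptop_memory_gb = 8                  # the "spiffy new laptop" in the comparison

laptops_per_b460 = b460_memory_tb * GB_PER_TB // laptop_memory_gb

print(f"B460 M4: {b460_memory_tb} TB of memory, "
      f"or {laptops_per_b460} laptops' worth")
```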
Not too long ago, all the content of Wikipedia would fit in this type of footprint (in 2010 it was just under 6TB with media). Here is a fun illustration of what this scale of data would look like on paper (just the ~10GB of text, not the images). Now remember, we’re not talking about fitting all that data on the local disks of the server – we’re talking about fitting it in main memory. This is becoming crucially important in the field of data analytics, where “in-memory” is the key to speed and competitiveness. Applications like SAP HANA are at the forefront of this trend. Today, at Intel’s launch event in San Francisco, Dan Morales (Vice President of Enabling Functions at eBay) joined us to talk about how they’re betting on this type of analytic technology to help them make the eBay Marketplace work better for buyers and sellers (and eBay shareholders). I’ll post a video clip of that soon; his description of the challenges and opportunities, at eBay scale, is worth a watch.
We’ve talked about memory scaling, and Bruno Messina has a nice post with more on the scalability of these systems and of UCS at large. But performance is the name of the game: behemoth processing power is what we look for at this end of the server spectrum, and Intel has not disappointed with this round of new technology. The next generation of the Intel Xeon E7 family packs up to 15 cores per processor and delivers an average 2x performance increase over previous-generation products. Gains will be even higher on specific workloads: up to 3x on database, and higher still for virtualization. Cisco’s implementation of this technology has once again set the standard for system performance. In today’s launch, Intel cited Cisco with 6 industry-leading results on key workloads. As of this posting, the closest competitor was Dell with 4; HP ProLiant posted 1. So hats off, once again, to the engineering team in Cisco’s Computing Systems Product Group. Girish Kulkarni has a great summary of the performance news here.
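For a feel of the compute density those figures imply, here is a quick sketch (assuming all four sockets carry the top-bin 15-core E7 part; the per-core memory number is my own derived figure, not one cited at the launch):

```python
# Core and memory totals for a fully populated four-socket B460 M4,
# using the figures cited above: up to 15 cores per Xeon E7 socket, 6 TB RAM.

sockets = 4
cores_per_socket = 15     # top-bin next-generation Xeon E7
memory_gb = 6 * 1000      # 6 TB, decimal units as in the post

total_cores = sockets * cores_per_socket
memory_per_core_gb = memory_gb // total_cores

print(f"{total_cores} cores, ~{memory_per_core_gb} GB of RAM per core")
```

That per-core memory ratio is exactly what in-memory workloads like SAP HANA are hungry for.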
Our collaboration with Intel is one of the best technology combinations in the industry today. Consider what we both bring to the party. Intel: innovation in processor technology that drives Moore’s Law. Cisco: innovation in connecting things across the data center and around the world. UCS is an outcome of two blue-chip tech powerhouses investing in real innovation and the results have changed the industry.
In 1991, Stewart Alsop famously wrote: “I predict that the last mainframe will be unplugged on 15 March 1996.” He just as famously had to eat his words. He munched on those twelve years ago, and while mainframes and RISC-based systems remain, there is an inexorable trend as the heaviest analytic workloads continue to shift to the type of scale-up x86-based systems we’re talking about today. It only makes sense. So while this will garner me plenty of comments from the architectural purists out there, I say “go ahead and plug a mainframe back in.” It will fit right in your UCS B-Series blade chassis…
Tags: Big Data, Blade Servers, Cisco Data Center, Cisco Data Center strategy, Cisco Servers, Cisco UCS, Cisco Unified Computing System, SAP HANA, unified computing
First, the Internet of Things:
Consider these impressive stats shared in a keynote from Cisco’s CTO and CSO Padmasree Warrior last week at Cisco Live, London:
- 50 billion “things” – trees, vehicles, traffic signals, devices and whatnot – will be connected together by 2020 (vs. 1,000 devices connected in 1984)
- More information was created in 2012 than in the previous 5,000 years combined!
- Two-thirds of the world’s mobile data will be video by 2015.
These statistics may seem a bit surprising, but the fact is, they cannot be ignored by CIOs and others chartered with the responsibility of managing IT infrastructure.
Impact on Enterprise and SP Infrastructure strategies
Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening at the access layer, but they are causing a ripple effect upstream in data center and cloud environments and, coupled with new application requirements, are triggering CIOs across larger Enterprises and Service Providers to rapidly evolve their IT infrastructure strategies.
It is much the same with cloud infrastructure strategies. Even as Enterprises have aggressively pursued the journey to Private Cloud, their preference for hybrid clouds, where they can enjoy the “best of both worlds” of public and private, has grown as well. However, the move to hybrid clouds has been somewhat hampered by challenges, as outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.
The Fabric approach
To address many of these issues, Cisco has long advocated the concept of a holistic data center fabric, at the heart of its Unified Data Center philosophy. The fundamental premise of breaking down silos and bringing together disparate technologies across network, compute and storage is what makes this so compelling. At the heart of it is the Cisco Unified Fabric, serving as the glue.
As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.
Let’s talk about SCALING the fabric first:
- Industry’s highest-density L2/L3 10G/40G switch: Building on our previous announcement redefining fabric scale, this time we introduce the new Nexus 6000 family in two form factors – the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to be complementary to Nexus 5500 and Nexus 7000 series deployments; it is not to be confused with the Catalyst 6500 or the Nexus fabric interconnects.
The Nexus 6000 is built with Cisco’s custom silicon and delivers 1-microsecond port-to-port latency. It carries forward some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. Clearly, as in the past, Cisco’s ASICs have differentiated themselves against the lowest-common-denominator approach of merchant silicon, delivering both better performance and greater value through tight integration with the software stack.
The Nexus 5500, incidentally, gets 40G expansion modules and is accompanied by a brand-new Fabric Extender, the 2248PQ, which comes with 40G uplinks as well. All of these, along with the 10G server interfaces, help pair 10G server access with 40G server aggregation.
Also, as a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings performance analytics, application visibility and network intelligence. This is the first services module, with others to follow, and it brings parity with the new vNAM functionality as well.
- Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that help extend the fabric – the Fabric Extender (FEX) solution is very popular for extending the fabric to the server/VM, as are some of the Data Center Interconnect technologies such as Overlay Transport Virtualization (OTV) and Locator/ID Separation Protocol (LISP), among others. Each has its benefits.
The Nexus 1000V InterCloud takes these to the next level by allowing the data center fabric to be extended to provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. This is meant to help lower the barriers to hybrid cloud deployment and is designed as a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe (1H CY13).
This video does a good job of explaining the concepts of the Intercloud solution:
Tags: Andre Kindness, Ayman Sayed, Cisco Cloud strategy, Cisco Controller, Cisco Data Center strategy, Cisco ONE, Cisco Open Network Environment, David Ward, David Yen, GDIT, Greg Sanchez, Internet of Things (IoT), Kerby Lyons, Matt Davy, NAM, Nexus 1000V InterCloud, Nexus 6000, onePK, OpenFlow, padmasree warrior, Shashi Kiran, SunGard Availability Services, Unified Data Center, Unified Fabric