Our Common Platform Architecture (CPA) for Big Data has been gaining momentum as a viable platform for enterprise big data deployments. The newest addition to the portfolio is EMC’s new Pivotal HD™, which natively integrates Greenplum MPP database technology with Apache Hadoop, enabling SQL applications and traditional business intelligence tools directly on the Hadoop framework. Extending support for Pivotal HD on Cisco UCS, Satinder Sethi, Vice President at Cisco’s Data Center Group, said: “Hadoop is becoming a critical part of the enterprise data management portfolio that must co-exist with and complement enterprise applications. EMC’s Pivotal HD is an important step towards that by enabling native SQL processing for Hadoop.”
Building on our 3+ years of partnership across the Greenplum database distribution and Hadoop distributions, the joint solution offers all the architectural benefits of the CPA, including: Unified Fabric -- a fully redundant, active-active fabric for server clustering; Fabric Extender technology -- highly scalable and cost-effective connectivity; Unified Management -- holistic management of the infrastructure through a single pane of glass using UCS Manager; and high performance -- a high-speed fabric along with Cisco UCS C240 M3 Rack Servers powered by Intel® Xeon® E5-2600 series processors. Unique to this solution are the management integration and data integration capabilities between Pivotal HD-based Big Data applications running on the CPA and enterprise applications running on Cisco UCS B-Series Blade Servers connected to enterprise SAN storage from EMC, or on integrated solutions like Vblock.
The Cisco solution for Pivotal HD is offered as a reference architecture and as Cisco UCS SmartPlay solution bundles that can be purchased by ordering a single part number: UCS-EZ-BD-HC -- a rack-level solution optimized for low cost per terabyte, and UCS-EZ-BD-HP -- a rack-level solution that offers a balance of compute power and I/O bandwidth, optimized for price/performance.
Over the past two months or so, I’ve been blogging on Cisco Domain TenSM, Cisco Services’ framework to guide you on your path to data center and cloud transformation. We are just over halfway through the discussion of Cisco Domain Ten, so I thought it worthwhile, especially for anyone reading about this concept for the first time, to write a quick refresher and summary of the articles I’ve written so far. So forgive the brevity, and please do dive into the links for more information if you missed these articles the first time. And if you’ve read every article -- thanks!
In the first few posts in this series, we have hopefully shown that not all cores are created equal and that not all GHz are created equal. This creates challenges when comparing two CPUs within a processor family, and even greater challenges when comparing CPUs from different processor families. If you read a blog or a study that showed 175 desktops on a blade with dual E7-2870 processors, how many desktops can you expect from the E7-2803 processor? Or an E5 processor? Our assertion is that SPECint is a reasonable metric for predicting VDI density, and in this blog I intend to show you how much SPECint is enough [for the workload we tested].
You are here. As a quick recap, this is a series of blogs covering the topic of VDI, and here are the posts in this series:
Addition and subtraction versus multiplication and division. Shawn already explained the concept of SPEC in question 2, so I won’t repeat it. You’ve probably noticed that Shawn talked about “blended” SPEC whereas I’m covering SPECint (integer). As it turns out, the majority of task workers really exercise the integer portion of a processor rather than the floating point portion of a processor. Therefore, I’ll focus on SPECint in this post. If you know more about your users’ workload, you can skew your emphasis more or less towards SPECint or SPECfp and create your own blend.
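If you do know your users' workload mix, the blend described above reduces to simple weighted arithmetic. Here's a minimal sketch in Python; the SPECfp score and the weights are illustrative placeholders, not published benchmark results:

```python
# Hypothetical helper for blending SPECint and SPECfp scores by workload mix.
# The scores and weights below are illustrative, not published benchmark results.

def blended_spec(spec_int: float, spec_fp: float, int_weight: float = 0.8) -> float:
    """Weighted blend of integer and floating-point SPEC scores.

    int_weight reflects how integer-heavy the user workload is;
    task workers skew heavily toward integer work, so a high
    int_weight (0.8-1.0) is a reasonable starting point.
    """
    return int_weight * spec_int + (1.0 - int_weight) * spec_fp

# Example: a processor with SPECint 305 and a hypothetical SPECfp of 250,
# blended for a task-worker population that is ~80% integer-bound:
print(blended_spec(305, 250, int_weight=0.8))  # 0.8*305 + 0.2*250 ≈ 294
```

A pure task-worker population is simply the `int_weight=1.0` edge case, which is why focusing on SPECint alone is a safe simplification here.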
The method to the madness. Let me take you on a short mathematical journey using the figure below. Starting at the top, we know each E5-2665 processor has a SPECint score of 305. It doesn’t matter how many cores it has or how fast those cores are clocked; it has a SPECint score of 305 (compared to 187.5 for the E5-2643 processor). Continuing down the figure, each blade we tested had two processors, so the E5-2665-based blade has a SPECint of 2 x 305, or 610 -- much higher than the E5-2643 blade's 375. And it produced many more desktops, as you can see from the graph embedded in the figure (the graph should look familiar from the first “question” in this series).
And now comes the simple math to get the SPECint requirement for each virtual desktop in each test system:
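That arithmetic can be sketched in a few lines of Python. The blade SPECint values come from the figures above; the desktop counts below are hypothetical placeholders, since the real counts are read off the test graph:

```python
# Per-desktop SPECint = blade SPECint / number of desktops the blade supported.
# Blade scores come from the post (2 x 305 = 610 for the E5-2665 blade,
# 2 x 187.5 = 375 for the E5-2643 blade); the desktop counts below are
# hypothetical placeholders for illustration.

def spec_per_desktop(blade_specint: float, desktops: int) -> float:
    """SPECint consumed by each virtual desktop on a fully loaded blade."""
    return blade_specint / desktops

e5_2665 = spec_per_desktop(2 * 305, 130)    # 610 / 130, hypothetical count
e5_2643 = spec_per_desktop(2 * 187.5, 80)   # 375 / 80, hypothetical count

# If both blades converge on roughly the same per-desktop SPECint, that
# number can be used to predict density on an untested processor
# (the 2 x 143 blade score here is also a made-up example):
predicted_desktops = (2 * 143) / e5_2665
```

The point of the exercise is that once the per-desktop SPECint requirement is known for a workload, density on any other processor is just its blade-level SPECint divided by that requirement.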
Boot Camp: Connect, Discover, Learn with Cisco
Monday, February 25, 8:30 a.m.–5:30 p.m.
Session ID: SPO2400
The Cisco Boot Camp is dedicated to educating and enabling partners to sell and deploy Cisco solutions successfully.
Breakout Session: Cisco Unified Data Center: From Server to Network
Wednesday, February 27, 12:30–1:30 p.m.
Speaker: Satinder Sethi, VP, Server Product Management and Data Center Solutions, Cisco
Demos: Cisco Booth 1015!
VDI: Cisco UCS with VMware View
Cisco Servers: Cisco Unified Computing System with VMware
Cisco Nexus 1000V Family
Cisco Unified Management
Branch Office Consolidation with Cisco E-Series Server
EMC VSPEX Proven Infrastructure
Also in Cisco Booth 1015, we’ll be shooting multiple episodes of Engineers Unplugged! Drop by to see some of the superstars of IT in full whiteboard action. Topics range from automation to virtualization to SDN. Send me a Tweet @CommsNinja if you’d like to participate!
In the last fiscal quarter, Cisco UCS reached another milestone: 20,000 customers (87% year-over-year growth). The (no longer) new data center paradigm of fabric-based computing must be delivering unique customer benefits, hence the market traction. Gartner defines fabric-based computing as follows:
Fabric-based computing (FBC) is a modular form of computing in which a system can be aggregated from separate (or disaggregated) building-block modules connected over a fabric or switched backplane. Fabric-based infrastructure (FBI) differs from FBC by enabling existing technology elements to be grouped and packaged in a fabric-enabled environment, while the technology elements of an FBC solution will be designed solely around the fabric implementation model.
In this post I will dive deeper into why customers experience benefits with the Cisco Unified Computing System. So let's start with the term “fabric”. A Lippis report helps us understand the data center fabric, and this TechTarget article by Michael Brandenburg gives us some more background:
Legacy three-tiered data center architecture was designed to service the heavy north-south traffic of client-server applications, while enabling network administrators to manage the flow of traffic. Engineers adopted spanning tree protocol (STP) in these architectures to optimize the path from the client to server and allow for link redundancy. STP worked well to support client-server applications and its traffic flows, but proved inefficient for server-to-server or east-west communications associated with distributed application architecture.
…Server virtualization compounds the problem with spanning tree and the three-tiered architecture.
… data center fabric, a network where traffic from any port can reach any other node with as few latency-inducing hops as possible.
This is eye opening for those of us who live in the server and application world. Bottom line – the data center fabric will result in fewer hops and lower latency for servers communicating with each other in the data center.
So how is this achieved within the Cisco Unified Computing System? It is done with the Fabric Interconnect, which is the I/O hub and the very soul of the system. The Fabric Interconnect consolidates three separate networks: LANs, SANs, and high-performance computing networks. It provides consolidated access to both SAN storage and network-attached storage (NAS) over the fabric, which means Cisco Unified Computing System servers can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. It also lowers costs by reducing the number of network adapters, switches, and cables.
Cisco UCS Manager, the embedded device manager software in the Fabric Interconnect, gives users the ability to slice and dice the system's big chunk of physical network capacity into much smaller subunits -- flexibly, and with the ability to change those decisions through software configuration. With Cisco UCS, IT organizations can now deliver dynamic network infrastructure and network services across all types of applications -- from applications like Oracle, SAP, three-tier J2EE, and Microsoft to virtualized applications from VMware, Microsoft, and Citrix.
In his blog, John McCool, Cisco SVP and CTO, defines fabric as “… a highly available, high performance shared infrastructure built with integrated, intelligent compute, storage and network nodes that can be rapidly and simply organized around the requirements of a given workload.” In part 2 of this blog I will detail the automation and management of the fabric-based compute nodes (up to 160) connected to a single pair of UCS Fabric Interconnects.