
VDI “The Missing Questions” #7: How memory bus speed affects scale

March 13, 2013 at 11:59 am PST

This was the test I most eagerly anticipated, because there is so little information on the web about running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the only blades in the industry capable of supporting both the top-bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact of using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz with two DIMMs per channel (16 DIMMs) and at a slower speed with three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.

Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:

The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
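To put those bus speeds in perspective, here is a rough back-of-the-envelope calculation (not from our testing) of the theoretical peak bandwidth of the E5-2600’s four DDR3 channels per socket at each data rate. Real-world throughput is always lower, but the ratio between the speeds is what matters here.

```python
# Back-of-the-envelope peak memory bandwidth per E5-2600 socket.
# Each DDR3 channel is 64 bits (8 bytes) wide, and the E5-2600 family
# has four channels per socket. These are theoretical peaks only.

BYTES_PER_TRANSFER = 8    # 64-bit DDR3 channel
CHANNELS_PER_SOCKET = 4   # E5-2600 family

def peak_gb_per_sec(data_rate_mt_s):
    """Theoretical peak GB/s per socket at a given DDR3 data rate (MT/s)."""
    return data_rate_mt_s * 1e6 * BYTES_PER_TRANSFER * CHANNELS_PER_SOCKET / 1e9

for rate in (1600, 1333, 1066):
    print(f"DDR3-{rate}: ~{peak_gb_per_sec(rate):.1f} GB/s per socket")

# DDR3-1600: ~51.2 GB/s per socket
# DDR3-1333: ~42.7 GB/s per socket
# DDR3-1066: ~34.1 GB/s per socket
```

On paper that is roughly a one-third drop in peak bandwidth going from 1600MHz to 1066MHz, which is exactly why the real-world density results below are worth a close look.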

 

As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only about 4% greater, which is effectively negligible given that Login VSI deliberately randomizes the load in these tests.



Cisco UCS SmartPlay Offers for Microsoft Exchange, SharePoint & SQL Server

March 1, 2013 at 12:39 pm PST

As you look at your upcoming Microsoft Exchange, SharePoint, or SQL Server projects, keep in mind Cisco’s Unified Computing System (UCS) SmartPlay program. The program offers attractive pricing on select Cisco UCS blade and rack server bundles. Cisco UCS servers provide an optimal IT platform for these key Microsoft workloads, helping you deliver exceptional performance, virtual machine density, and scalability to your organization.


Cisco and Intel Extend Relationship into Big Data

February 26, 2013 at 11:25 am PST

Today Paul Perez, Vice President and CTO of Cisco’s Data Center Group, joined Boyd A. Davis, Intel Architecture Group Vice President and GM of the Data Center Software Division, on stage in downtown San Francisco to announce a proposed extension of the alliance between Cisco and Intel into Big Data.

Over the past months, our readers have had the opportunity to follow Cisco’s growing investment in this market, frequently articulated by our experts Raghunath Nambiar and Jacob Rapp through blog posts and talks at industry events.

Cisco and Intel have worked together for years to deliver enterprise solutions that improve performance and enable organizations to deliver new services. As we have stated several times recently, Intel has been a critical partner and significant contributor to the phenomenal success of Cisco UCS. So it will come as no surprise that Cisco and Intel are looking to partner again to offer you a leading Big Data solution.

In this video, Cisco’s Paul Perez and Intel’s Boyd Davis explain how Cisco will support the Intel distribution of Apache Hadoop on UCS, and how both companies intend to collaborate to address the growing Big Data needs of our joint customers.

Please read the Intel announcement and stay tuned for a more detailed and technical blog by Raghunath Nambiar.


VDI “The Missing Questions” #4: How much SPECint is enough

February 25, 2013 at 6:35 am PST

In the first few posts in this series, we have hopefully shown that not all cores are created equal and that not all GHz are created equal. This creates challenges when comparing two CPUs within a processor family, and even greater challenges when comparing CPUs from different processor families. If you read a blog or a study that showed 175 desktops on a blade with dual E7-2870 processors, how many desktops can you expect from the E7-2803 processor? Or an E5 processor? Our assertion is that SPECint is a reasonable metric for predicting VDI density, and in this blog I intend to show you how much SPECint is enough [for the workload we tested].

You are here. As a quick recap, this is a series of blogs covering the topic of VDI, and here are the posts in this series:

Addition and subtraction versus multiplication and division. Shawn already explained the concept of SPEC in question 2, so I won’t repeat it here. You’ve probably noticed that Shawn talked about “blended” SPEC, whereas I’m covering SPECint (integer). As it turns out, most task workers exercise the integer portion of a processor far more than the floating-point portion. Therefore, I’ll focus on SPECint in this post. If you know more about your users’ workload, you can skew your emphasis toward SPECint or SPECfp and create your own blend, as sketched below.
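If it helps to make the “blend” idea concrete, here is a minimal sketch. The 90/10 weighting and the SPECfp value below are made-up placeholders for illustration, not numbers from this series.

```python
# Illustrative only: blend SPECint and SPECfp according to your users'
# workload mix. The weighting and the SPECfp value are hypothetical.

def blended_spec(spec_int, spec_fp, int_weight=0.9):
    """Weighted blend of integer and floating-point SPEC scores."""
    return int_weight * spec_int + (1 - int_weight) * spec_fp

# A task-worker profile that leans heavily on integer work:
print(blended_spec(spec_int=305, spec_fp=230, int_weight=0.9))  # 297.5
```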

The method to the madness. Let me take you on a short mathematical journey using the figure below. Starting at the top, we know each E5-2665 processor has a SPECint of 305. It doesn’t matter how many cores it has or how fast those cores are clocked; it has a SPECint score of 305 (compared to 187.5 for the E5-2643 processor). Continuing down the figure, each blade we tested had two processors, so the E5-2665-based blade has a SPECint of 2 x 305, or 610, versus just 375 for the E5-2643 blade. And the E5-2665 blade produced many more desktops, as you can see from the graph embedded in the figure (it should look familiar from the first “question” in this series).

And now comes the simple math to get the SPECint requirement for each virtual desktop in each test system:
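The measured densities come from the Login VSI results earlier in the series and aren’t reproduced in this excerpt, but the calculation itself is just division. Here is a minimal sketch; the desktop counts of 130 and 80 are hypothetical placeholders, not our measured numbers.

```python
# Per-desktop SPECint = blade SPECint / number of desktops hosted.
# The SPECint scores are from the post (2 x 305 and 2 x 187.5);
# the desktop counts are hypothetical placeholders.

blades = {
    "E5-2665 blade": {"spec_int": 2 * 305,   "desktops": 130},
    "E5-2643 blade": {"spec_int": 2 * 187.5, "desktops": 80},
}

for name, blade in blades.items():
    per_desktop = blade["spec_int"] / blade["desktops"]
    print(f"{name}: {per_desktop:.2f} SPECint per desktop")
```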



VCE: Converging on a better data center

Cisco’s Unified Data Center strategy is rooted in the idea that customers shouldn’t be put in the position of DIY technology integration. It’s simply an unfair ask given everything that IT and LOB leaders contend with above and beyond the infrastructure. As technology has evolved, the individual components of the data center are decreasingly the source of complexity. The hardest part is the connections between them: creating the sum of the parts that can actually run applications. Eliminating this complexity has been Cisco’s guiding star in the data center: building systems that help customers focus on what matters most to them, applications and IT services, not infrastructure.

VCE, Cisco’s joint venture with EMC, VMware, and Intel, is a critical expression of this vision for fabric-based infrastructure and converged solutions. Today marks a major milestone for VCE with the broadest solutions announcement since the launch of Vblock Systems, which has become widely recognized as the gold standard of converged infrastructure.

These new offerings extend the proven value of Vblock (converged, pre-engineered infrastructure that slashes deployment time and ongoing management burden) into a new set of market segments and key workloads.

The team at VCE has done a great job detailing this; I see the key components being brought forward today as:

  • Taking Vblock Systems to new customer segments and use cases: System 200 is designed for mid-size data centers and service provider-managed customer premises (CPE) scenarios. System 100 extends to remote office/branch office environments. Combining these new Vblocks with applications like Microsoft Exchange and SharePoint, VDI, and Cisco Unified Communications will continue the push to eliminate DIY solution assembly for customers.
  • VCE Specialized Systems: a series of systems optimized for key workloads, starting with SAP HANA. Vblock certification here is an exciting new opportunity for customers to quickly adopt this hot new analytic technology.
  • VCE Vision Intelligent Operations, which brings intelligent discovery and single-lens management to Vblock Systems. This takes an API-driven approach similar to the one at the core of UCS to enable orchestration of the converged system. This is a critical component for cloud builders.

VCE’s launch is a major milestone in its evolution, but the way each Vblock system is built, maintained, and supported remains constant and predictable. Customers can continue to rely on the same comprehensive physical and logical build done in the factory, single-point support, and the IT agility and economic benefits these create.

Customers have spoken, and the results reflect it: 1,000 Vblock Systems shipped, demand at a billion-dollar run rate, and recognition as the market leader in converged systems.

Congratulations to the VCE team as they continue to make it easier for customers to concentrate on the business and not on the infrastructure!

