
VDI “The Missing Questions” #7: How memory bus speed affects scale

March 13, 2013 at 11:59 am PST

This was the test I most eagerly anticipated because of the lack of information on the web regarding running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the few blades in the industry capable of supporting both the top-bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact of using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz at two DIMMs per channel (16 DIMMs) and a slower speed at three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.

Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:

The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
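To make the population-to-speed trade-off concrete, here is a minimal sketch of the behavior described above. Treat it as an illustration, not a UCSM reference: the 1333MHz figure applies only with the updated firmware, and the function assumes an evenly populated blade.

```python
# Sketch: memory bus speed vs. DIMM population on a 2-socket E5-2600 blade.
# Speeds mirror this post: 1600MHz at 1-2 DIMMs per channel, 1066MHz at
# 3 DIMMs per channel at test time, and 1333MHz at 3 DIMMs per channel
# once the updated UCSM firmware (2.0(5a) / 2.1(1b)) is in place.

SOCKETS = 2
CHANNELS_PER_SOCKET = 4          # E5-2600: four memory channels per socket


def bus_speed_mhz(total_dimms, updated_firmware=False):
    """Return the memory bus speed for an evenly populated blade."""
    channels = SOCKETS * CHANNELS_PER_SOCKET    # 8 channels on a 2-socket blade
    dimms_per_channel = total_dimms / channels
    if dimms_per_channel <= 2:
        return 1600                             # full speed at 1 or 2 DPC
    return 1333 if updated_firmware else 1066   # a third DIMM per channel slows the bus


for dimms in (8, 16, 24):
    print(f"{dimms} DIMMs -> {bus_speed_mhz(dimms)} MHz")
```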

 

As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only by about 4%. That difference is effectively negligible, given that LoginVSI deliberately randomizes the load in these tests.


Social Media Is Like Gelato In A Cone #CiscoSMT #SocialSavvy

March 12, 2013 at 2:26 pm PST

Last week I spoke at an event and the definition of social media came up. Some people refer to social networking tools when they speak of social media, while others refer to the notion of engagement and content on the web. I’m more of a “gelato in a cone” kinda gal. I view social media as engagement and content (gelato) that lives in some kind of “online container”, such as a social networking site or another web platform (cone). I’m looking for both. I would even argue that customer experiences, whether social or not, could and should be connected to optimize the customer journey. For example, social content can live on your web site and your social networking sites, and conversations can be prominently featured at your events.

Building on the “gelato in a cone” interpretation of social media, we (@CiscoSocial) will be hosting a social media event for the savvy marketer in San Jose on April 18 and 19. Anyone and everyone is welcome to attend this free event as we bring together some super bright practitioners for 2 days of live chats and presentations. The practitioners that are lending their expertise and time to our event come from Twitter, LinkedIn, Kaiser Permanente, Walmart, Adobe, SAP, Intel, VMware, Citrix, ABC, eBay, Salesforce.com, MindShare, Engauge, Percolate, BuzzFeed, Performics, Digby, Blinq Media, Cisco, and more.

You may attend in person or via webcast; just please register ahead of time.

Register for the in-person event: http://cs.co/SMevent.

Register for the webcast: http://cs.co/SMEventWebcast.

Hash tags: #CiscoSMT, #SocialSavvy

Ping us at @CiscoSocial

We have a wide range of topics lined up for you; check out some details here.


VDI “The Missing Questions” #4: How much SPECint is enough

February 25, 2013 at 6:35 am PST

In the first few posts in this series, we have hopefully shown that not all cores are created equal and that not all GHz are created equal. This generates challenges when comparing two CPUs within a processor family and even greater challenges when comparing CPUs from different processor families. If you read a blog or a study that showed 175 desktops on a blade with dual E7-2870 processors, how many desktops can you expect from the E7-2803 processor? Or an E5 processor? Our assertion is that SPECint is a reasonable metric for predicting VDI density, and in this blog I intend to show you how much SPECint is enough [for the workload we tested].

You are here. As a quick recap, this is a series of blogs covering the topic of VDI, and here are the posts in this series:

Addition and subtraction versus multiplication and division. Shawn already explained the concept of SPEC in question 2, so I won’t repeat it. You’ve probably noticed that Shawn talked about “blended” SPEC whereas I’m covering SPECint (integer). As it turns out, the majority of task workers really exercise the integer portion of a processor rather than the floating-point portion. Therefore, I’ll focus on SPECint in this post. If you know more about your users’ workload, you can skew your emphasis more or less towards SPECint or SPECfp and create your own blend.
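If you do know your workload mix, a custom blend is just a weighted average of the two scores. Here’s a minimal sketch; the SPECfp value and the 80/20 weighting are illustrative placeholders (only the E5-2665 SPECint of 305 comes from this post), so substitute numbers for your own environment.

```python
def blended_spec(specint, specfp, int_weight=0.8):
    """Weighted blend of SPECint and SPECfp.

    int_weight reflects how integer-heavy you believe your users'
    workload is; task workers skew heavily toward integer work.
    """
    return int_weight * specint + (1 - int_weight) * specfp


# The E5-2665 SPECint of 305 appears later in this post; the SPECfp value
# and the 80/20 weighting below are placeholders, not measured data.
print(blended_spec(specint=305, specfp=250, int_weight=0.8))   # -> 294.0
```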

The method to the madness. Let me take you on a short mathematical journey using the figure below. Starting at the top, we know each E5-2665 processor has a SPECint of 305. It doesn’t matter how many cores it has or how fast those cores are clocked. It has a SPECint score of 305 (as compared to 187.5 for the E5-2643 processor). Continuing down the figure below, each blade we tested had two processors, so the E5-2665 based blade has a SPECint of 2 x 305… or 610. The E5-2665 blade has a much higher SPECint of 610 than the E5-2643 blade with just 375. And it produced many more desktops as you can see from the graph embedded in the figure (the graph should look familiar to you from the first “question” in this series).

And now comes the simple math to get the SPECint requirement for each virtual desktop in each test system:
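In sketch form, it’s just the blade’s SPECint divided by the number of desktops it sustained. The per-socket SPECint scores are from the figure above; the 1vCPU desktop counts are the VSImax results from question #3 of this series.

```python
# SPECint consumed per virtual desktop = blade SPECint / desktops achieved.
blades = {
    "E5-2665": {"specint_per_socket": 305.0, "desktops": 130},
    "E5-2643": {"specint_per_socket": 187.5, "desktops": 81},
}

for name, blade in blades.items():
    blade_specint = 2 * blade["specint_per_socket"]      # two sockets per blade
    per_desktop = blade_specint / blade["desktops"]
    print(f"{name}: {blade_specint:.0f} SPECint / {blade['desktops']} desktops "
          f"= {per_desktop:.1f} SPECint per desktop")
```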


Cisco and VMware Teaming Up to Accelerate VDI Uptake in 2013 – Join us at VMware Partner Exchange to Find Out More

Are you a Cisco/VMware partner going to VMware Partner Exchange (PEX)?  In Vegas!  Next week!  If so, and you’re focused on growing your VDI practice, there’s some great content for you to take in while there.  Before I get into PEX, let me remind you about our ongoing blog series, “VDI - The Questions You Didn’t Ask (But Really Should)”.  We’re up to Question 4 (coming soon), and if you’re looking for some great insights into the mystique of processor selection and its impact on VDI performance/density/etc., this is the series for you!  Now onto PEX…

Improving the ROI of VDI within small and medium-sized organizations

Next week, we’ll be updating our VMware partner community on new solutions that offer an accelerated path to growing their VDI practice. These solutions are aimed especially at smaller deployments, such as those in small and medium-sized businesses (or pilot/proof-of-concept environments), where the up-front CAPEX hurdle is often too high a barrier to make VDI cost effective.

New Ecosystem Solutions Portfolio

In tandem with VMware, we’ll be announcing a new portfolio of solutions, built with ecosystem partners on Cisco UCS with VMware Horizon View, that delivers better VDI price-to-performance, greater operational simplicity, and an uncompromised user experience.

Delivering the Tools to Make Our Channel Partners Successful in 2013

Next week we have good news for Cisco/VMware channel partners who want to grow their VDI practice and deliver unprecedented value for their customers.  Join us at PEX to learn how Cisco and VMware are accelerating our partners’ path to success in 2013.

So Are You Ready for PEX?  Here are some key activities you don’t want to miss:

Cisco Partner Bootcamp (#SPO2400)

Monday, February 25, 8:30 a.m.-5:30 p.m.

The Cisco Boot Camp is dedicated to educating and enabling partners to sell and deploy Cisco solutions successfully. Here’s the best part: VDI is up first at 8:30am!  I’m pretty sure we’ll have food and non-alcoholic beverages (c’mon, you’re in Vegas, I really don’t think that will pose a problem) to make it worth your while.  You will:

  • Expand your technical depth and understanding of key Cisco solutions for VDI, Cloud, Branch/Remote-Office IT, Unified Management and more
  • Gain insights to identify your customers’ needs effectively and acquire new customers
  • Find out how to expand business by cross-selling Cisco solutions and services
  • Network with other partners, Cisco experts, and executives
  • Come away with go-to-market selling strategies that enable you to accelerate your business

Cisco’s Breakout Session (#SPO2421): Cisco Unified Data Center -- From Server to Network
Wednesday, February 27, 12:30-1:30 p.m.
Presenter: Satinder Sethi, VP, Server Product Management and Data Center Solutions, Cisco

Attend the Cisco breakout to understand why today’s data center architecture must support a highly mobile workforce, proliferation of devices, and data-driven business models and be capable of transparently incorporating cloud applications and services. Satinder Sethi will present these diverse requirements and discuss how the Cisco Unified Data Center platform addresses these challenges.

You will learn about the Cisco Unified Data Center architecture, which combines compute, storage, network, and management into a platform designed to automate IT as a service across physical and virtual environments, resulting in increased budget efficiency, more agile business responsiveness, and simplified IT operations.

Demos!  Stop By Booth #1015

Eight solid demos await you at our PEX booth this year, including VDI with VMware Horizon View and our UCS Storage Accelerator (using Fusion-io), Unified Computing System (UCS), Cisco Office in a Box with UCS-Express, Cisco Intelligent Automation for Cloud, Cisco Cloupia, and Cisco Nexus 1000v to name a few.  Experts on hand will answer any/all questions!

It will be a busy week – mark your calendars with the activities above, and see you there!


VDI “The Missing Questions” #3: Realistic Virtual Desktop Limits

So this is the Million Dollar Question, right? You, along with the executives sponsoring your particular VDI project, wanna know: How many desktops can I run on that blade? It’s funny how such an “it depends” question becomes a benchmark for various vendors’ blades, including said vendor here.

Well, for the purpose of this discussion series, the goal here is not to reach some maximum number by spending hours in the lab tweaking various knobs and dials of the underlying infrastructure. The goal of this overall series is to see what happens to the number of sessions as we change various aspects of the compute: CPU speed/cores, memory speed, and memory capacity. Our series posts are as follows:

 

You are Invited!  If you’ve been enjoying our blog series, please join us for a free webinar discussing the VDI Missing Questions, with Doron, Shawn and myself (Jason)!  Access the webinar here!

But for the purpose of this question, let’s look simply at the scaling numbers with the appropriate amount of RAM for the VDI count we will achieve (i.e., no memory overcommit) and the maximum allowed memory speed (1600MHz).
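As a side note, no-overcommit memory sizing is simple arithmetic. Here’s a minimal sketch, where the per-desktop memory size and hypervisor overhead are illustrative assumptions rather than values from our test bed.

```python
def blade_ram_needed_gb(desktops, gb_per_desktop=2, hypervisor_overhead_gb=8):
    """RAM a blade needs when every desktop's memory is fully backed
    (no overcommit). The per-desktop size and hypervisor overhead are
    placeholder assumptions; substitute your own template's numbers."""
    return desktops * gb_per_desktop + hypervisor_overhead_gb


# e.g. the 130-desktop 1vCPU result shown in the table below
print(blade_ram_needed_gb(130), "GB")   # 268 GB with these assumptions
```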

As Doron already revealed in question 1, we did find some maximum numbers in our test environment. Other than the customized Cisco ESX build on the hosts and tuning our Windows 7 template per VMware’s View Optimization Guide for Windows 7, the VMware View 5.1.1 environment was a fairly default build-out designed for simplicity of testing, not massive scale. We kept unlogged VMs in reserve, like you would in the real world, so users can log in quickly…yes, that may affect some theoretical maximum number you could get out of the system, but again…not the goal.

And the overall test results look a little something like this:

                    E5-2643 Virtual Desktops    E5-2665 Virtual Desktops
1vCPU, 1600MHz                 81                         130
2vCPU, 1600MHz                 54                          93

 

As explained in Question 1, cores really do matter…but even then, surprisingly, the two CPUs are neck and neck in the race until around the 40 VM mark. Then the 2 vCPU desktops on the quad-core CPU really take a turn for the worse:


Why?

Co-scheduling!

When a VM has two (or more) vCPUs, the hypervisor must find two (or more) physical cores to plant the VM on for execution within a fairly strict timeframe to keep that VM’s multiple vCPUs in sync.

MULTIPLE vCPU VMS ARE NOT FREE!

Multiple vCPUs create a constraint that takes time for the hypervisor to sort out every time it makes a scheduling decision, not to mention you simply have more cores allocated for the hypervisor to schedule for the same number of sessions: DOUBLE that of the one vCPU VM. The only way to fix this issue is with more cores.
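To make that constraint concrete, here’s a toy sketch (this is not how ESX’s relaxed co-scheduler is actually implemented, just an illustration of the idea): a 2-vCPU VM can only be dispatched when two physical cores happen to be free at the same instant, which becomes much rarer on a busy host.

```python
import random


def dispatch_chance(p_core_free, vcpus, cores=8, trials=100_000):
    """Toy model: each physical core is independently free with probability
    p_core_free at a scheduling tick. A VM can be dispatched only if at
    least `vcpus` cores are free simultaneously."""
    hits = 0
    for _ in range(trials):
        free_cores = sum(random.random() < p_core_free for _ in range(cores))
        if free_cores >= vcpus:
            hits += 1
    return hits / trials


# On a busy 8-core host (each core free only 20% of the time), needing two
# simultaneously free cores is dramatically harder than needing one.
for vcpus in (1, 2):
    print(f"{vcpus} vCPU: dispatched ~{dispatch_chance(0.2, vcpus):.0%} of ticks")
```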

That said, the 2 vCPU VMs continue to scale consistently on the E5-2665, with double the core count of the E5-2643. At around the 85 session mark, even the E5-2665 can no longer provide a consistent experience with 2vCPU VDI sessions running. I’ll stop here and jump off that soapbox…we’ll dig more into the multiple-vCPU virtual desktop configuration in a later question (hint hint hint)…

Now let’s take a look at the more traditional VDI desktop: the 1 vCPU VM:


With the quad-core E5-2643, performance holds strong until around the 60 session mark; then latency quickly builds as the 4000ms threshold is hit at 81 sessions. But what a trooper the E5-2665 is! Follow its 1 vCPU scaling line in the chart: all those cores show a very consistent latency line up to around the 100 session mark, where it then becomes somewhat less consistent on the way to the 4000ms VSImax of 130. That’s 130 responsive systems on a single server! I remember when it was awesome to get 15 or so systems going on a dual-socket box 10 or so years ago, and we are at nearly 10x that quantity today!

Let’s say you want to impose harsher limits on your environment. You’ve got a pool of users who are a bit more sensitive to response time than others (like your executive sponsors!). A 4000ms response time may be too much, and you want to halve that to 2000ms. According to our test scenario, the E5-2665 can STILL sustain around 100 sessions before the scaling becomes a bit more erratic in this workload simulation.


Logic would suggest half the response time may mean half the sessions, but that simply isn’t the case, as shown here. We reach a Point of Chaos (POC!) where response times and behaviors become very inconsistent as we continue to add sessions. In other words: it does not take many more desktop sessions in a well-running environment that is close to the “compute cliff” before latency doubles and your end users are not happy. On the plus side, assuming storage I/O latency isn’t an issue, our testing also shows that you do not need to drop many sessions from each individual server in your cluster to rapidly recover session response time.

So in conclusion, the E5-2643, with its high clock speed and lower core count, is best suited for smaller deployments of less than 80 desktops per blade. The E5-2665, with its moderate clock speed and higher core count, is best suited for larger deployments of greater than 100 desktops per blade.

 

Next up…how much normalized CPU SPEC does a virtual desktop need?

 
