
Back in January we launched a blogging series (with the above title) exploring the various server design parameters that impact VDI performance and scalability.  Led by Shawn Kaiser, Doron Chosnek and Jason Marchesano, the team has been examining the impact of things like CPU core count, core speed, vCPU count, SPECint, memory density, IOPS and more.  If you’re new to VDI and trying to avoid the pitfalls that lie between proof of concept and large-scale production, this has hopefully been an insightful journey that has yielded some practical design guidance to make your implementation that much more successful.

Here’s a snapshot of the ground we covered along the way:

  1. Introduction
  2. Core Count vs. Core Speed
  3. Core Speed Scaling (Burst)
  4. Realistic Virtual Desktop limits
  5. How much SPECint is enough?
  6. How does 1vCPU scale compared to 2vCPUs?
  7. What do you really gain from a 2vCPU virtual desktop?
  8. How memory bus speed affects scale
  9. How does memory density affect VDI scalability?
  10. How many storage IOPS?

What?  There’s a Whitepaper? (who doesn’t like free stuff?)

If you’re just catching up with us and want a nice, complete, whitepaper-ized version of the series, this is your lucky day.  You can download the paper here.

VDI No-Holds-Barred Webinar!

Finally, last month, as part of the series we also offered a webinar on BrightTalk, where our panel of experts walked us through the design considerations explored in the series and fielded audience questions.  It was one of those high-quality interactions that will hopefully remain useful to those who catch the replay.

If you missed the event, you can watch it here.  The guys fielded a lot of great Q&A from our community, and in fact there were a few lingering questions we didn’t have time to address during the event.  They’ve captured those questions (along with their answers) for me, and they’re provided below.

What’s Next?  Got a Question?

I hope the journey was as impactful for you as it was for me.  I should also point out that the guys are considering what to tackle in the next phase of their lab testing.  I would highly encourage you to share your input (or questions) by emailing us at 9questions@cisco.com.  Let us know what’s on your mind, where we should take the test effort to better align with the implementation scenarios you’re facing, and so on.  Thanks!


Q&A From Our Web Event:

1)      I have used the Liquidware Labs VDI assessment tool to help me understand how to accurately size my customer’s virtual desktops.  Should I not be using tools like these?

Answer:  These tools do a great job of looking at utilization in existing environments.  The potential issue is that most of these tools only aggregate MHz utilization; there is no concept of SPEC conversion to properly map those numbers to newer processors.  The other thing we have seen when using this raw data to fit everything onto a particular blade solution is that the “overhead” of the VM is usually not taken into consideration.  So sometimes it looks like you can put 20 desktops on a single physical core, and that’s just too aggressive when you look at typical vCPU oversubscription ratios.  The bottom line is that these types of tools are great initial sanity checkers to validate the possibility of VDI consolidation.  If you are involved in these types of assessments and are working on a Cisco UCS solution, we have tools that can assist in importing this type of data and helping you make more pointed recommendations as well.  Just email 9questions@cisco.com and we can discuss!  (A rough back-of-the-envelope version of this kind of sanity check is sketched below.)
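For illustration only, here is a minimal Python sketch of the sanity math described above: convert measured MHz to the new processor via a SPECint ratio, allow for per-VM overhead, and cap the result with a vCPU oversubscription limit.  Every number in it (measured MHz, SPECint ratings, overhead factor, oversubscription cap, server specs) is a hypothetical placeholder, not a value from our testing.

    # Back-of-the-envelope VDI sizing check (illustrative only).
    # All numbers below are hypothetical placeholders, not lab-measured values.

    # Assessment data from the existing environment
    measured_mhz_per_desktop = 350        # average MHz observed per physical desktop

    # SPEC-based conversion: per-core SPECint of the old desktop CPU vs. the
    # target server CPU (hypothetical figures)
    specint_per_core_old = 25.0
    specint_per_core_new = 45.0
    spec_ratio = specint_per_core_new / specint_per_core_old

    # Hypervisor/VM overhead allowance and a vCPU oversubscription cap
    vm_overhead_factor = 1.10             # ~10% per-VM overhead assumption
    max_vcpu_per_core = 8                 # don't exceed this ratio regardless of the MHz math

    # Effective demand per desktop, normalized to the new processor
    effective_mhz = measured_mhz_per_desktop * vm_overhead_factor / spec_ratio

    # Hypothetical target server: 2 sockets x 10 cores at 2.6 GHz
    cores_per_server = 20
    mhz_per_core = 2600
    server_capacity_mhz = cores_per_server * mhz_per_core

    desktops_by_mhz = server_capacity_mhz / effective_mhz
    desktops_by_ratio = cores_per_server * max_vcpu_per_core   # assumes 1vCPU desktops

    desktops_per_server = int(min(desktops_by_mhz, desktops_by_ratio))
    print(f"Sanity-check estimate: ~{desktops_per_server} desktops per server")
    print(f"  (MHz-based: {desktops_by_mhz:.0f}, oversubscription cap: {desktops_by_ratio})")

Again, this is only a sanity check in the spirit of the answer above; a real sizing exercise would use your own assessment data and validated workload testing.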


2)      Do you find a performance increase or higher host density by scheduling VMs with similar vCPU counts on the same hosts?

Answer:  We did not test mixing 1vCPU and 2vCPU workloads, so we can’t technically qualify an answer about the impact this would have, but this is a great idea and we will definitely consider it in our phase 2 testing.


3)      Did you find that giving more RAM to a VM caused the performance figures to decrease?  E.g. 100 VMs at 4GB/VM compared to 100 VMs using 1.5GB/VM.

Answer:  Since our testing used a static memory allocation of 1.5GB, we do not have the data to answer this particular question.  Again, this is another great idea to possibly include in our phase 2 testing.


4)      Hi. A bit unclear on the last slide.  150 simultaneous desktops produce 39000 IOPS.  Is this assuming physical desktops, with the figures based on the IOPS of each physical desktop?  If so, I don’t see how the IOPS figure is relevant, as it would only be on local disk, not SAN.  I think I misunderstood the last slide!

Answer:  The 39,000 IOPS figure was measured by both vCenter and the storage array controller as the total number of IOPS required to boot 150 virtual desktops.  No testing was done with physical desktops.
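As a quick back-of-the-envelope check (just arithmetic on the figures above, not an additional measurement), that works out to roughly 260 IOPS per desktop during the boot storm:

    # Per-desktop boot-storm estimate from the measured totals above
    total_boot_iops = 39000
    desktop_count = 150
    print(total_boot_iops / desktop_count)   # 260.0 IOPS per desktop while booting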


5)      Loved the Cisco blogs regarding vCPU, SPEC, memory speed, CPU performance.  Is there a similar piece of research that has been done regarding server VM performance rather than VDI?

Answer:  Not *yet*…  Hint, hint.  🙂


6)      Are there unique considerations for plant floor VDI deployments?  The loads on those systems are typically higher on a continuous basis.

Answer:  Specific VDI use cases with heavier workloads like these definitely exist, and you should size based on those requirements.  If you feel your individual application requirements are not close to one of the pre-configured LoginVSI tests, the LoginVSI tool does allow for custom workload configurations where you can have it simulate working against your own apps.