So this is the Million Dollar Question, right? You, along with the executives sponsoring your particular VDI project, want to know: how many desktops can I run on that blade? It’s funny how such an “it depends” question becomes a benchmark for various vendors’ blades, including those of said vendor here.
Well, for the purposes of this discussion series, the goal is not to reach some maximum number by spending hours in the lab tweaking the various knobs and dials of the underlying infrastructure. The goal of the overall series is to see what happens to the number of sessions as we change various aspects of the compute: CPU speed/cores, memory speed, and memory capacity. Our series posts are as follows:
You are Invited! If you’ve been enjoying our blog series, please join us for a free webinar discussing the VDI Missing Questions, with Doron, Shawn and myself (Jason)! Access the webinar here!
But for the purposes of this question, let’s look simply at the scaling numbers with the appropriate amount of RAM for the VDI count we will achieve (i.e. no memory overcommit) and the maximum allowed memory speed (1600MHz).
As Doron already revealed in question 1, we did find some maximum numbers in our test environment. Other than the customized Cisco ESX build on the hosts and tuning our Windows 7 template per VMware’s View Optimization Guide for Windows 7, the VMware View 5.1.1 environment was a fairly default build-out designed for simplicity of testing, not massive scale. We kept unlogged VMs in reserve, as you would in the real world, so users can log in quickly…yes, that may affect some theoretical maximum number you could get out of the system, but again…not the goal.
And the overall test results look a little something like this:
[Charts: E5-2643 virtual desktops; E5-2665 virtual desktops]
As explained in Question 1, cores really do matter…but even then, surprisingly, the two CPUs are neck and neck in the race until around the 40-VM mark. Then the 2 vCPU desktops on the quad-core CPU really take a turn for the worse:
When a VM has two (or more) vCPUs, the hypervisor must find two (or more) physical cores to plant the VM on for execution within a fairly strict timeframe to keep that VM’s multiple vCPUs in sync.
MULTIPLE vCPU VMS ARE NOT FREE!
Multiple vCPUs create a constraint that takes time for the hypervisor to sort out every time it makes a scheduling decision. Not to mention, you simply have more cores allocated for the hypervisor to schedule for the same number of sessions: DOUBLE that of the one-vCPU VM. The only way to fix this issue is with more cores.
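To make the co-scheduling penalty concrete, here’s a back-of-the-envelope sketch (our own simplification for illustration, not how the ESX scheduler is actually implemented): if each physical core happens to be free with some probability, a 2 vCPU VM needs two free cores at the same instant, so its odds of being placed on any given scheduling attempt fall off much faster on a quad-core part than on an 8-core part.

```python
from math import comb

def p_schedulable(cores: int, vcpus: int, p_core_free: float) -> float:
    """Probability that at least `vcpus` of `cores` physical cores are
    simultaneously free, assuming each core is free independently."""
    return sum(
        comb(cores, k) * p_core_free**k * (1 - p_core_free)**(cores - k)
        for k in range(vcpus, cores + 1)
    )

# Quad-core (E5-2643-like) vs. 8-core (E5-2665-like) at 50% core utilization
for cores in (4, 8):
    one = p_schedulable(cores, 1, 0.5)
    two = p_schedulable(cores, 2, 0.5)
    print(f"{cores} cores: 1 vCPU {one:.2f}, 2 vCPU {two:.2f}")
```

In this toy model the 1 vCPU VM is almost always placeable on either part, while the 2 vCPU VM's chances drop noticeably on four cores, which matches the intuition that only more cores relieve the constraint.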
That said, the 2 vCPU VMs continue to scale consistently on the E5-2665, with double the core count of the E5-2643. At around the 85-session mark, though, even the E5-2665 can no longer provide a consistent experience with 2 vCPU VDI sessions running. I’ll stop here and jump off that soapbox…we’ll dig more into the multiple-vCPU virtual desktop configuration in a later question (hint hint hint)…
Now let’s take a look at the more traditional VDI desktop: the 1 vCPU VM:
With the quad-core E5-2643, performance holds strong until around the 60-session mark, then latency quickly builds until the 4000ms threshold is hit at 81 sessions. But look at what a trooper the E5-2665 is! Follow its 1 vCPU scaling line in the chart: all those cores show a very consistent latency line up to around the 100-session mark, where it becomes somewhat less consistent up to the 4000ms VSImax of 130. That’s 130 responsive systems on a single server! I remember when it was awesome to get 15 or so systems going on a dual-socket box 10 or so years ago, and we are at 10x that quantity today!
Let’s say you want to impose harsher limits on your environment. You’ve got a pool of users that are a bit more sensitive to response time than others (like your executive sponsors!). A 4000ms response time may be too much, and you want to halve that to 2000ms. According to our test scenario, the E5-2665 can STILL sustain around 100 sessions before the scaling becomes a bit more erratic in this workload simulation.
Logic would suggest that half the response time means half the sessions, but that simply isn’t the case, as shown here. Instead, we reach the Point of Chaos (POC!), where response times and behaviors become very inconsistent as we continue to add sessions. In other words: in a well-running environment that is close to the “compute cliff,” it does not take many more desktop sessions before latency doubles and your end users are unhappy. On the plus side, and assuming storage I/O latency isn’t an issue, our testing shows you do not need to drop many sessions from each individual server in your cluster to rapidly recover session response time.
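The shape of that cliff is easy to see with a toy curve. Here is a minimal sketch with purely illustrative numbers (not our measured data) showing how a VSImax-style session maximum is read off a latency curve, and why halving the threshold does not halve the session count:

```python
# Toy illustrative data, not our actual measurements: (sessions, response ms).
# Latency stays flat for a long time, then climbs steeply near the cliff.
curve = [(20, 900), (40, 1100), (60, 1400), (80, 1900),
         (100, 2600), (120, 3600), (130, 4100)]

def sessions_at_threshold(curve, limit_ms):
    """Return the highest session count whose response time stays under limit_ms."""
    ok = [sessions for sessions, ms in curve if ms < limit_ms]
    return max(ok) if ok else 0

print(sessions_at_threshold(curve, 4000))  # the 4000ms threshold from our tests
print(sessions_at_threshold(curve, 2000))  # the harsher 2000ms limit
```

Because most of the latency growth happens in the last few dozen sessions, the 2000ms limit costs far fewer sessions than a linear model would predict, which is the same behavior we saw on the E5-2665.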
So in conclusion, the E5-2643, with its high clock speed and lower core count, is best suited for smaller deployments of less than 80 desktops per blade. The E5-2665, with its moderate clock speed and higher core count, is best suited for larger deployments of greater than 100 desktops per blade.
Next up…what is the minimum amount of normalized CPU SPEC does a virtual desktop need?
Tags: citrix, cpu, UCS, vdi, virtual desktop, virtualization, VMware, vxi
The saying “you’ve gotta give credit where credit is due” is exceptionally literal for Liberty University in Lynchburg, Virginia. Recently recognized as one of the most successful academic institutions in the country, Liberty University can credit the updated technology infrastructure of its data center for its ability to support a huge influx of students, faculty, and staff.
With growth comes the need to accommodate large numbers of people and resources – including IT support. The university’s existing IT systems were outdated and obstructing its potential for online expansion. By implementing the Cisco® Unified Computing System™ (UCS), based on Intel® Xeon® processors, Liberty’s network became more flexible, scalable, and reliable. The virtualized and consolidated infrastructure can support this multitude of users – critical, given the 85,000 students accessing the network from 95 countries around the world.
Cisco UCS has significantly decreased downtime for both students and staff, resulting in the ability to focus on education, not IT issues. Higher satisfaction, growing enrollment rates, and a unified network make for a promising future for Liberty University and its students.
Read the full article here.
Tags: higher education, unified computing, virtualization
Customers have often said to me, “Joann, we have virtualization all over the place. That’s cloud, isn’t it?” My response is, “Well, not really. That is not a cloud, but you can get to cloud!” Then there is a brief, uncomfortable silence, which I resolve with an action-provoking explanation that I will now share with you.
Here’s why that isn’t truly a cloud. What these customers often have is server provisioning that automates the process of standing up new virtual servers while the storage, network, and application layers continue to be provisioned manually. The result is higher management costs that strain IT budgets, which are decreasing or flat to begin with. With this approach, businesses aren’t seeing the agility and flexibility they expected from cloud. So, they become frustrated when they see their costs rising and continue struggling to align with new business innovation.
If your IT department adopted widespread virtualization and thought it was cloud, my guess is you are probably nodding your head in agreement. Don’t worry, you’re not alone.
So then, what are the key elements an organization needs to achieve the speed, flexibility and agility promised by cloud?
1) Self-service portal and service catalog
The self-service portal is the starting point customers use to order cloud services. Think of a self-service portal as a menu at a restaurant: the end user is presented with a standardized menu of services that have been defined according to IT’s policies and standards, and customers simply order what they need. Self-service portals greatly streamline resource deployment, which reduces the manual effort required from IT to provision resources.
2) Service delivery automation
After the user selects services from the portal service menu, then what? Well, under the hood should be automated service delivery—which is a defining characteristic of a real cloud environment. Behind each of the standardized menu items in the self-service portal is a blueprint or instructions that prescribe how the service order is delivered across the data center resources. This has been proven to appreciably simplify IT operations, reduce costs and drive business flexibility.
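As a rough illustration of the blueprint idea (hypothetical names and steps, not any particular cloud product’s API), each catalog item can be modeled as an ordered recipe of provisioning steps spanning compute, storage, and network that runs automatically once the order is placed:

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """Hypothetical catalog item: an ordered recipe of provisioning steps."""
    name: str
    steps: list = field(default_factory=list)

def deliver(blueprint: Blueprint) -> list:
    """Run every step in order and return an audit trail of what was automated."""
    return [f"{blueprint.name}: {step}" for step in blueprint.steps]

# One standardized menu item covering compute, storage, network, and OS layers
small_vm = Blueprint("small-linux-vm", [
    "allocate compute on cluster",
    "carve storage volume",
    "attach VLAN and assign IP",
    "deploy OS image",
])
audit = deliver(small_vm)
for line in audit:
    print(line)
```

The point of the sketch is that the storage and network steps sit in the same automated recipe as the compute step, instead of being provisioned manually after the VM appears.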
Read More »
Tags: amazon, CIAC, cloud, cloud infrastructure, Cloud Management, IAC, OpenStack, process automation, Self-Service Portal, UCS, vCloud Director, virtualization
Cloud computing is part of the journey to deliver IT as a Service, which enables IT to change from a cost center to a strategic business partner. Forrester Research recently published a report that concluded, “Cloud computing is ready for the enterprise… but many enterprises aren’t ready for the cloud.”1 Yet cloud deployments are happening – and I mean all types of clouds: Private, Public, and Hybrid. In other words, we have entered the World of Many Clouds.
The network touches everything and is a key building block for agile and scalable virtualized and cloud-based data centers. Yesterday, I introduced our new Nexus 6000 Series and new 40GE extensions to the Nexus 5500 and 2000 Series. Today, I would like to introduce the very first services module for the Nexus 7000 Series.
Read More »
Tags: Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, DCNM, FabricPath, fex, Hybrid Cloud, it-as-a-service, LISP, NAM, Network Analysis Module, nexus, Nexus 6000, Nexus 7000, NX-OS, OTV, private cloud, Public Cloud, Service Module, switch, Unified Fabric, virtualization
The evolution of the application environment is creating new demands on IT and the data center. Broad adoption of scale-out application architectures (e.g. big data), workload virtualization, and cloud deployments demands greater scalability across the fabric. The increase in east/west (i.e. server-to-server) traffic, along with higher adoption of 10GbE in the server access layer, is driving higher bandwidth requirements in the upstream links.
Following up on the introduction of 40GE/100GE on the Nexus 7000 Series, today we unveil the new Nexus 6000 Series, expanding Cisco’s Unified Fabric data center switching portfolio to provide greater deployment flexibility through higher density and scalability in an energy-efficient form factor.
The Cisco Nexus 6000 Series is the industry’s highest-density full-featured Layer 2/Layer 3 40 Gigabit fixed data center switch with Ethernet and Fibre Channel over Ethernet (FCoE) – an industry first! In addition to high scalability, the Nexus 6000 Series offers operational efficiency, superior visibility, and agility.
Some say “Nexus 6000 Series is a red carpet platform that will turn heads”. We agree! It’s because of …
Read More »
Tags: Cisco, Cisco ONE, cloud, Cloud Computing, Consolidation, convergence, data center, Fabric Path, FCoE, fex, Hybrid Cloud, it-as-a-service, LISP, nexus, Nexus 1000v, Nexus 6000, Nexus 7000, NX-OS, OTV, private cloud, Public Cloud, switch, Unified Fabric, virtualization