Sure, there are many events and conferences going on this week, but stick a reminder on your calendar to watch this week’s episode of Engineers Unplugged. Ed Saipetch (@edsai), of Speaking in Tech fame, and Andre Leibovici (@andreleibovici) of VMware talk about the evolution of BYOD (Bring Your Own Device), VDI, EUC, and the changes brought about by new devices.
Bringing the 1970s office to you, unicorn style.
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
Practice drawing unicorns
What have been your challenges (IT or client side) as we move into the world of mobile employment and endlessly proliferating devices and apps? Post a comment here, or join the discussion on Twitter, #EngineersUnplugged.
Data Centres are evolving rapidly, in response to the many industry IT Megatrends we have previously discussed. Services and applications are increasingly delivered from very large data centres and, more and more, from hybrid and public clouds as well.
Specifically, a good example of services being delivered from data centres is Hosted Desktops. I discussed in my last post how technologies such as TrustSec can help secure VXI/VDI deployments. VXI is a good example of a service originally delivered only from private data centres, now being delivered As A Service as well.
Video is (and will be) increasingly delivered from data centers as a service. Infrastructure services (servers/VM, storage…) are also delivered internally more and more through Private Clouds.
Consequently, securing those environments is now perceived by our customers’ CTOs and architects as the biggest barrier to adopting clouds on a much larger scale.
We will therefore look at how TrustSec can pervasively help secure all data centre traffic.
We recently discussed the perfect IT storm that is currently brewing in business. BYOD, Unified Access, Video, the Many Clouds, SDN… all happening at once, on current infrastructure, and yet demanding more.
Some of the comments you made further emphasized the need to have an architectural approach.
Discussing VDI deployments with our customers in EMEAR, two things really are at the centre of our discussions from an infrastructure standpoint.
- Security, which I’ll discuss in today’s post.
- Latency and user experience. Two recent posts, here and here, provide great insight on how to tackle this challenge.
I have therefore asked Steinthor Bjarnason (firstname.lastname@example.org), Senior EMEAR Security Consultant, based out of Norway, to give me his perspective. He has 15 years’ experience in the security space, and his perspectives are drawn from numerous customer projects, both in the Enterprise and the Service Provider space.
Can you see it? The end is nigh! The end of this blog series, not necessarily “the end” as in AMC’s The Walking Dead sort of end. Are you zombie-stumbling across this blog from a random Google search? Here is a table of contents to help you on your journey as we once again delve into the depths and address another question on our quest to answer… The VDI questions you didn’t ask, but really should have.
Got RAM? VDI is an interesting beast both from a physical perspective as well as the care and feeding of it. One thing this beast certainly does like is RAM (and braaaiiiins). Just in case I am still being stalked by that tech writer, RAM stands for Random Access Memory. I spoke a bit about Operating Systems in our 5th question in this series, and this somewhat builds upon that in regards to the amount of memory you should use. Microsoft says Windows 7 needs: 1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit). For the purpose of our testing, we went smack in the middle with 1.5GB of RAM. Does it really matter what we used for this testing? It does a little – one, we need to have sufficient resources for the desktop to perform the functions of the workload test, and second, we need to pre-establish some boundaries to measure from.
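To make that sizing arithmetic concrete, here is a minimal sketch of a per-host RAM estimate. The 1.5 GB per desktop matches the test configuration above; the per-VM hypervisor overhead and hypervisor reserve figures are illustrative assumptions, not numbers from the test setup:

```python
# Hypothetical VDI host RAM sizing sketch. Only the 1.5 GB per-desktop
# figure comes from the blog's test setup; the overhead and reserve
# values below are illustrative assumptions.

def host_ram_needed_gb(num_desktops,
                       ram_per_vm_gb=1.5,       # as used in the tests above
                       overhead_per_vm_gb=0.1,  # assumed per-VM hypervisor overhead
                       hypervisor_reserve_gb=4.0):  # assumed hypervisor reserve
    """Estimate total host RAM needed for a given desktop count."""
    return num_desktops * (ram_per_vm_gb + overhead_per_vm_gb) + hypervisor_reserve_gb

# e.g. 100 desktops at 1.5 GB each:
print(host_ram_needed_gb(100))  # 164.0
```

The real per-VM overhead varies with vCPU count, configured memory, and hypervisor version, which is why the next section looks at calculating overhead properly rather than using a flat number like this.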
Calculating overhead. In order to properly account for memory usage, we need to take into account the overhead of certain things in the Hypervisor. If you want to learn more about calculating overhead, click here. Here are a couple of things we are figuring in overhead for:
This was the test I most eagerly anticipated because of the lack of information on the web regarding running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the few blades in the industry capable of supporting both the top-bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact of using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz at two DIMMs per channel (16 DIMMs) and a slower speed at three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.
Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:
Join us for a free webinar on March 27 discussing this blog series. Register here.
The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only 4% greater, which is effectively negligible given that Login VSI is designed to randomize the load in these tests.