In another week of all the technology that’s fit to whiteboard, Engineers Unplugged features Chris Wahl (@chriswahl) and Steve Kaplan (@ROIDude) talking through cloud stack options, including Cisco Cloupia and Cisco Intelligent Automation for Cloud (IAC), as well as VMware’s vCloud Director (vCD) and vCloud Automation Center (vCAC). It’s ___aaS in the new cloud world. A great conversation from the partner perspective. Here we go:
Chris Wahl and Steve Kaplan with the very first UaaS (Unicorn as a Service). Is there anything the cloud cannot do?
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
On Engineers Unplugged this week, we are trying something new: a double edition! First up in Episode 5, VCE’s Jay Cuthrell (@qthrul) and Nick Weaver (@lynxbat) talk shop about automation and the evolution of open source, including GitHub, and the role of community in solving problems in tech. An amazing discussion with practical guidance on how you can get more involved:
Jay Cuthrell and Nick Weaver take the Community Unicorn Challenge!
This was the test I most eagerly anticipated because of the lack of information on the web about running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the few blades in the industry capable of supporting both the top-bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact of using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz at two DIMMs per channel (16 DIMMs) and at a slower speed at three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.
Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:
Join us for a free webinar on March 27 discussing this blog series. Register here.
The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
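The population rules above can be sketched in a few lines of code. This is just an illustration of the relationship the paragraph describes, not a Cisco tool; the function name and the `firmware_qualified_1333` flag are my own inventions, and real behavior also depends on the DIMMs’ rated speed.

```python
# Illustrative sketch of the E5-2600 memory-speed rules described above.
# 4 channels per socket x 3 DIMMs per channel x 2 sockets = 24 DIMMs max.
CHANNELS_PER_SOCKET = 4
SOCKETS = 2

def bus_speed_mhz(total_dimms, firmware_qualified_1333=False):
    """Bus speed for an evenly populated 2-socket blade (hypothetical helper).

    Assumes DIMMs rated for 1600MHz, spread evenly across all channels.
    """
    dimms_per_channel = total_dimms // (CHANNELS_PER_SOCKET * SOCKETS)
    if dimms_per_channel <= 2:
        return 1600  # 1 or 2 DIMMs per channel runs at full speed
    # 3 DIMMs per channel: 1066MHz at the time of testing, 1333MHz once
    # qualified in UCSM 2.0(5a) / 2.1(1b)
    return 1333 if firmware_qualified_1333 else 1066

for dimms in (8, 16, 24):
    print(dimms, "DIMMs ->", bus_speed_mhz(dimms), "MHz")
```

Running this walks through the three configurations in the paragraph: 8 and 16 DIMMs stay at 1600MHz, and 24 DIMMs drops the whole bus to the slower speed.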
As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only 4% greater, a difference that is effectively negligible given that LoginVSI is designed to randomize the load in these tests.
Last week I spoke at an event and the definition of social media came up. Some people refer to social networking tools when they speak of social media, while others refer to the notion of engagement and content on the web. I’m more of a “gelato in a cone” kinda gal. I view social media as engagement and content (gelato) that lives in some kind of “online container,” such as a social networking site or another web platform (cone). I’m looking for both. I would even argue that customer experiences, whether social or not, could and should be connected to optimize their journey. For example, social content can live on your web site and your social networking sites, and conversations can be prominently featured at your events.
Building on the “gelato in a cone” interpretation of social media, we (@CiscoSocial) will be hosting a social media event for the savvy marketer in San Jose on April 18 and 19. Anyone and everyone is welcome to attend this free event as we bring together some super bright practitioners for 2 days of live chats and presentations. The practitioners that are lending their expertise and time to our event come from Twitter, LinkedIn, Kaiser Permanente, Walmart, Adobe, SAP, Intel, VMware, Citrix, ABC, eBay, Salesforce.com, MindShare, Engauge, Percolate, BuzzFeed, Performics, Digby, Blinq Media, Cisco, and more.
You may attend in person or via webcast; just please register ahead of time.
In the first few posts in this series, we have hopefully shown that not all cores are created equal and that not all GHz are created equal. This creates challenges when comparing two CPUs within a processor family, and even greater challenges when comparing CPUs from different processor families. If you read a blog or a study that showed 175 desktops on a blade with dual E7-2870 processors, how many desktops can you expect from the E7-2803 processor? Or an E5 processor? Our assertion is that SPECint is a reasonable metric for predicting VDI density, and in this blog I intend to show you how much SPECint is enough [for the workload we tested].
You are here. As a quick recap, this is a series of blogs covering the topic of VDI, and here are the posts in this series:
Addition and subtraction versus multiplication and division. Shawn already explained the concept of SPEC in question 2, so I won’t repeat it. You’ve probably noticed that Shawn talked about “blended” SPEC whereas I’m covering SPECint (integer). As it turns out, the majority of task workers really exercise the integer portion of a processor rather than the floating point portion. Therefore, I’ll focus on SPECint in this post. If you know more about your users’ workload, you can skew your emphasis more or less toward SPECint or SPECfp and create your own blend.
The method to the madness. Let me take you on a short mathematical journey using the figure below. Starting at the top, we know each E5-2665 processor has a SPECint of 305. It doesn’t matter how many cores it has or how fast those cores are clocked; it has a SPECint score of 305 (as compared to 187.5 for the E5-2643 processor). Continuing down the figure, each blade we tested had two processors, so the E5-2665 based blade has a SPECint of 2 x 305, or 610, versus just 375 (2 x 187.5) for the E5-2643 blade. And the E5-2665 blade produced many more desktops, as you can see from the graph embedded in the figure (it should look familiar to you from the first “question” in this series).
And now comes the simple math to get the SPECint requirement for each virtual desktop in each test system:
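The division in question can be sketched as follows. The blade-level SPECint figures (2 x 305 = 610 and 2 x 187.5 = 375) come straight from the post; the desktop counts below are placeholders for illustration only, since the actual densities are shown in the graph rather than stated in the text.

```python
# Illustrative per-desktop SPECint arithmetic for the calculation described
# above: blade SPECint divided by the number of desktops the blade supported.

def specint_per_desktop(blade_specint, desktops):
    """SPECint consumed per virtual desktop (hypothetical helper name)."""
    return blade_specint / desktops

# Blade-level SPECint from the text:
e5_2665_blade = 2 * 305    # 610
e5_2643_blade = 2 * 187.5  # 375

# Desktop counts here are made-up placeholders, NOT the test results:
print(round(specint_per_desktop(e5_2665_blade, 130), 2))
print(round(specint_per_desktop(e5_2643_blade, 80), 2))
```

If the two blades yield a similar SPECint-per-desktop number, that supports the post’s assertion that SPECint is a reasonable predictor of VDI density across processors.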