On April 17, 2013, Cisco announced SPECjbb2013 results, with the Cisco UCS C220 M3 Rack Server delivering top SPECjbb2013 MultiJVM 2-socket x86 performance.
Cisco’s results on the SPECjbb®2013 benchmark—41,954 maximum Java operations per second (max-jOPS) and 16,545 critical Java operations per second (critical-jOPS)—demonstrate that the Cisco UCS® C220 M3 Rack Server and Oracle Java Standard Edition (SE) 7u11 can provide an optimized platform for Java Virtual Machines (JVMs) and deliver accelerated response to throughput-intensive Java applications.
Exercising new Java SE 7 features, the SPECjbb2013 benchmark stresses the CPU processing, memory speed, and chipset performance capabilities of the underlying platform. The result consists of two metrics: the full-capacity throughput (max-jOPS) and the critical throughput (critical-jOPS) under service-level agreements (SLAs) ranging from 10 to 500 milliseconds (ms), measured from request issuance to receipt of a response indicating operation completion.
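To make the critical-jOPS idea concrete, here is a minimal sketch assuming the metric aggregates the highest throughput sustained under each response-time SLA with a geometric mean; the SLA points reflect the 10–500 ms range above, but the throughput values and the exact aggregation are illustrative assumptions, not the official SPEC run rules:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: derive a critical-jOPS-style metric by
// taking the best throughput sustained under each response-time SLA
// and aggregating with a geometric mean. All values are hypothetical.
public class CriticalJopsSketch {
    public static void main(String[] args) {
        // Hypothetical sustained throughput (jOPS) per SLA bound (ms).
        Map<Integer, Double> jopsUnderSla = new LinkedHashMap<>();
        jopsUnderSla.put(10, 9_500.0);
        jopsUnderSla.put(50, 14_200.0);
        jopsUnderSla.put(100, 17_800.0);
        jopsUnderSla.put(200, 21_300.0);
        jopsUnderSla.put(500, 26_900.0);

        // Geometric mean: exponential of the mean of the logarithms.
        double logSum = 0.0;
        for (double jops : jopsUnderSla.values()) {
            logSum += Math.log(jops);
        }
        double criticalJops = Math.exp(logSum / jopsUnderSla.size());
        System.out.printf("critical-jOPS (illustrative): %.0f%n", criticalJops);
    }
}
```

A geometric mean keeps one poor SLA point from being averaged away, which is why a system can post a high max-jOPS yet a much lower critical-jOPS.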
For the SPECjbb2013 MultiJVM category, the tested configuration consisted of a controller and two groups, each comprising a transaction injector and a back end, all running across multiple JVM instances within a single operating system image. The JVM instances ran on a Cisco UCS C220 M3 Rack Server powered by two 2.90-GHz, 8-core Intel® Xeon® processor E5-2690 CPUs, running the Red Hat Enterprise Linux 6.2 operating system and the Java HotSpot™ 64-Bit Server Virtual Machine Version 1.7.0_11. The Cisco UCS C220 M3 Rack Server and Oracle Java SE 7u11 delivered fast response times and high transaction throughput on the SPECjbb2013 benchmark. The system supported 41,954 max-jOPS and 16,545 critical-jOPS, representing the best critical-jOPS 2-socket x86 result in the MultiJVM category.
SPECjbb2013 benchmark results show that the Cisco UCS C220 M3 Rack Server delivers excellent scalability to JVMs and applications, with more throughput within specified time frames than solutions from other vendors.
Cisco UCS delivers the scalability needed for large-scale Java application deployments. The dramatic reduction in the number of physical components results in a system that makes effective use of limited space, power, and cooling by deploying less infrastructure to perform more work. Cisco UCS C220 M3 Rack Servers can operate in standalone deployments or be managed as part of the Cisco Unified Computing System for increased IT operation efficiency. For additional information on Cisco UCS and Cisco UCS solutions, please visit www.cisco.com/go/ucs.
SPEC and SPECjbb are registered trademarks of Standard Performance Evaluation Corporation. The performance results described are derived from detailed benchmark results available at http://www.spec.org/ as of April 22, 2013.
What do these three things have in common? For Lone Star College System (LSCS), the fastest-growing community college in the U.S., these items helped build a whole new technology foundation.
At a higher-education conference, LSCS CIO Link Alander and Steve Kaplan, former VP of data center virtualization at Presidio, began hashing out, on a napkin, what it would take to deliver the best computing experience. They jotted down all the ways technology could deliver a customizable, optimal, and educational platform to students and faculty.
The vision was a toolbox, not just any one tool: an entire resource pool for professors to contribute to -- and students to pull from -- anytime, on any device, from anywhere.
Back in January we launched a blogging series (with the above title) exploring the various server design parameters that impact VDI performance and scalability. Led by Shawn Kaiser, Doron Chosnek, and Jason Marchesano, we’ve been exploring the impact of factors like CPU core count, core speed, vCPU count, SPECint ratings, memory density, IOPS, and more. If you’re new to VDI and trying to avoid the pitfalls that lurk between proof of concept and large-scale production, this has hopefully been an insightful journey, one that has yielded practical design guidance to make your implementation that much more successful.
Here’s a snapshot of the ground we covered along the way:
What? There’s a Whitepaper? (Who doesn’t like free stuff?)
If you’re just catching up with us and want a nice, complete, whitepaper-ized version of the series, this is your lucky day. You can download the paper here.
VDI No-Holds-Barred Webinar!
Finally, last month, as part of the series we also offered a webinar on BrightTalk, where our panel of experts walked through the design considerations explored in the series and fielded audience questions. It was one of those high-quality interactions that hopefully provides ongoing usefulness to those who catch the replay.
If you missed the event, you can watch it here. The guys fielded a lot of great Q&A from our community, and in fact there were a few lingering questions we didn’t have time to address during the event. They’ve captured these, along with their answers, below.
What’s Next? Got a Question?
I hope the journey was as impactful for you as it was for me. I should point out that the guys are considering what to attack as part of the next phase of their lab testing, and I would highly encourage you to provide your input (or questions) by emailing us at email@example.com. Let us know what’s on your mind, where we should take the test effort to better align with the implementation scenarios you’re facing, and so on. Thanks!
Q&A From Our Web Event:
1) I have used the Liquidware Labs VDI assessment tool to help me understand how to accurately size my customer’s virtual desktops. Should I not be using tools like these?
Answer: These tools do a great job of looking at utilization in existing environments. The potential issue is that most of them only aggregate MHz utilization; there is no concept of a SPEC conversion to properly map that demand to newer processors. The other thing we have seen with using this raw data to fit everything into a particular blade solution is that the “overhead” of the VM is usually not taken into consideration. So sometimes it looks like you can run 20 desktops on a single physical core, and that’s just too aggressive when you look at typical vCPU oversubscription. The bottom line is that these types of tools are great initial sanity checks to validate the possibility of VDI consolidation. If you are involved in these types of assessments and are working on a Cisco UCS solution, we have tools that can assist in importing this type of data and help you make more pointed recommendations as well. Just email firstname.lastname@example.org and we can discuss!
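To illustrate why raw MHz alone can mislead, here is a minimal sizing sketch; the SPEC-ratio scaling factor, the overhead percentage, the oversubscription cap, and all workload numbers are assumptions for illustration, not measured data or vendor guidance:

```java
// Illustrative sizing sketch: convert measured per-desktop MHz demand
// into physical cores on a newer CPU. The SPEC-based scaling factor,
// overhead, and oversubscription cap below are assumptions.
public class VdiSizingSketch {
    public static void main(String[] args) {
        int desktops = 150;
        double avgMhzPerDesktop = 350.0;   // measured on the old hardware (assumed)
        double overheadFactor = 1.10;      // ~10% per-VM overhead (assumed)
        double specScale = 1.4;            // assumed per-core SPEC ratio, new vs. old CPU
        double coreMhzNew = 2900.0;        // e.g., a 2.90-GHz core on the new server
        double maxDesktopsPerCore = 8.0;   // assumed vCPU oversubscription cap

        // Cores needed by SPEC-normalized CPU demand, including overhead.
        double demandMhz = desktops * avgMhzPerDesktop * overheadFactor;
        double coresByDemand = demandMhz / (coreMhzNew * specScale);

        // Cores needed to respect the oversubscription cap.
        double coresByCap = desktops / maxDesktopsPerCore;

        double coresNeeded = Math.max(coresByDemand, coresByCap);
        System.out.printf("Physical cores needed: %.1f%n", coresNeeded);
    }
}
```

In this made-up example the oversubscription cap, not raw MHz demand, ends up dictating the core count, which is exactly the kind of constraint a MHz-only assessment misses.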
2) Do you find a performance increase or higher host density by scheduling VMs with similar vCPU counts on the same hosts?
Answer: We did not test mixing 1-vCPU and 2-vCPU workloads, so we can’t technically qualify the impact this would have, but this is a great idea and we will definitely consider it in our phase 2 testing.
3) Did you find that giving more RAM to a VM caused the performance figures to decrease? E.g., 100 VMs at 4GB/VM compared to 100 VMs using 1.5GB/VM.
Answer: Since our testing used a static memory allocation of 1.5GB per VM, we do not have the data to answer this particular question. Again, another great idea to possibly include in our phase 2 testing.
4) Hi. I'm a bit unclear on the last slide. 150 simultaneous desktops produce 39,000 IOPS. Is this assuming physical desktops, with figures based on the IOPS of each physical desktop? If so, I don’t see how the IOPS figure is relevant, as it’s only on local disk, not SAN. I think I misunderstood the last slide!
Answer: The 39,000 IOPS was measured by both vCenter and the storage array controller as the total number of IOPS needed to boot 150 virtual desktops, which works out to roughly 260 IOPS per desktop during the boot storm. No testing was done with physical desktops.
5) Loved the Cisco blogs regarding vCPU, SPEC, memory speed, CPU performance. Is there a similar piece of research that has been done regarding server VM performance rather than VDI?
Answer: Not *yet…. Hint hint.
6) Are there unique considerations for plant floor VDI deployments? The loads on those systems are typically higher on a continuous basis.
Answer: Specific use cases for VDI with different workloads definitely exist, and you should size based on those requirements. If you feel your individual application requirements are not close to one of the pre-configured LoginVSI tests, the LoginVSI tool does allow custom workload configurations in which it simulates working against your own apps.
In this week’s episode of Engineers Unplugged, join Gabriel Chapman (@Bacon_Is_King) and Dave Henry (@davemhenry) as they chart the evolution of virtualization, from mainframes up to software defined data centers. This is a technical deep-dive you don’t want to miss:
One thing that hasn’t evolved as much: the unicorn, shown here fully virtualized:
Introducing the fully virtualized unicorn, courtesy of Gabriel Chapman and Dave Henry.
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density, and there is a move toward a world of many clouds. Then there is massive data growth: some studies show that data is doubling every two years, while adoption of solid-state drives (SSDs) keeps increasing. All of these megatrends demand new solutions in the SAN market. To meet these needs, Cisco is introducing its next-generation storage networking innovations with the new MDS 9710 Multilayer Director and the new MDS 9250i Multiservice Switch. These new multiprotocol, services-rich MDS innovations redefine storage networking with superior performance, reliability, and flexibility!
We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customer needs today and tomorrow.
For example, with the new MDS solutions, we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? These are just a couple of the many innovations we are introducing. In other words, we bring 16 Gigabit FC and beyond to our customers:
A NEW BENCHMARK FOR PERFORMANCE
We design our solutions with future requirements in mind. We want to create long-term value for our customers and investment protection moving forward.
The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards delivering:
1.536 Tbps per slot for Fibre Channel – 24 Tbps per chassis capacity
Only 3 fabric cards are required to support full 16G line rate capacity
Supports up to 384 line-rate 16G FC or 10G FCoE ports
So there is room for growth to higher throughput in the future, without forklift upgrades (a quick back-of-the-envelope check of these numbers follows below)
This is more than three times the bandwidth of any Director in the market today, providing our customers with superior investment protection for any future needs!
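Here is that back-of-the-envelope check as a small sketch. It assumes the per-slot fabric bandwidth is split evenly across the six fabric cards and that the 384 ports come from 48-port line cards in 8 payload slots; those assumptions are inferred from the figures above rather than taken from a datasheet:

```java
// Back-of-the-envelope check of the MDS 9710 figures quoted above.
// Assumption: per-slot fabric bandwidth is split evenly across the six
// fabric cards, and 384 ports = 8 payload slots x 48-port line cards.
public class FabricMathSketch {
    public static void main(String[] args) {
        double perSlotGbps = 1536.0;    // 1.536 Tbps per slot with all 6 fabric cards
        int fabricCards = 6;
        double perCardPerSlotGbps = perSlotGbps / fabricCards;      // 256 Gbps

        int portsPerSlot = 48;
        int gbpsPerPort = 16;           // 16G FC
        double lineRateNeedGbps = portsPerSlot * gbpsPerPort;       // 768 Gbps per slot

        int cardsForLineRate =
                (int) Math.ceil(lineRateNeedGbps / perCardPerSlotGbps); // 3 cards
        System.out.printf("Fabric cards needed for full 16G line rate: %d of %d%n",
                cardsForLineRate, fabricCards);
        System.out.printf("Total line-rate ports: %d%n", 8 * portsPerSlot); // 384
    }
}
```

Under these assumptions, three of the six fabric cards already cover a fully loaded 48-port 16G line card in every slot, which is where the headroom claim in the list above comes from.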