On March 19, 2013, Cisco announced the best two-socket virtualized SAP Sales and Distribution (SD) benchmark result in a Linux environment, with the Cisco Unified Computing System™ (Cisco UCS®) delivering high scalability and low latency in virtualized SAP Business Suite deployments.
Cisco’s benchmark result for the Cisco UCS B200 M3 Blade Server shows support for up to 5530 concurrent users and a SAP Application Performance Standard (SAPS) score of 30,270, derived from the processing of 605,330 order line items per hour and 1,816,000 dialog steps per hour. This result demonstrates that a Cisco UCS B200 M3 Blade Server configured with an LSI 400-GB SLC WarpDrive can deliver high scalability and low latency in virtualized SAP Business Suite deployments.
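As a quick sanity check on these figures: SAP defines 100 SAPS as 2,000 fully processed order line items per hour, which corresponds to 6,000 dialog steps per hour. A minimal sketch (the constants come from SAP's published SAPS definition, not from this announcement) shows how the reported throughput maps onto the SAPS score:

```python
# Per SAP's SAPS definition: 100 SAPS = 2,000 fully processed
# order line items per hour = 6,000 dialog steps per hour.
ORDER_LINE_ITEMS_PER_100_SAPS = 2000
DIALOG_STEPS_PER_100_SAPS = 6000

def saps_from_line_items(line_items_per_hour):
    """Derive a SAPS score from order line items processed per hour."""
    return line_items_per_hour / ORDER_LINE_ITEMS_PER_100_SAPS * 100

def saps_from_dialog_steps(dialog_steps_per_hour):
    """Derive a SAPS score from dialog steps processed per hour."""
    return dialog_steps_per_hour / DIALOG_STEPS_PER_100_SAPS * 100

print(saps_from_line_items(605_330))      # ≈ 30,266, consistent with the published 30,270
print(saps_from_dialog_steps(1_816_000))  # ≈ 30,267, consistent with the published 30,270
```

Both throughput figures independently land within a few SAPS of the certified 30,270 score, which is expected since they measure the same workload.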
The tested configuration consisted of a Cisco UCS chassis equipped with one Cisco UCS B200 M3 Blade Server running Red Hat Enterprise Linux (RHEL) 6.4 on KVM. The server was configured with two 2.90-GHz, 8-core Intel Xeon processor E5-2690 CPUs and 256 GB of 1600-MHz memory. The blade server ran both the SAP Business Suite application software and the 64-bit Sybase ASE 15.7 database in a single virtual machine. SAP Enhancement Package 5 for SAP Enterprise Resource Planning (ERP) 6.0 was used in this scenario. The Cisco UCS B200 M3 Blade Server recorded the best two-way virtualized SAP SD Benchmark result on SAP Enhancement Package 5 for SAP ERP 6.0 and Sybase ASE 15.7. In the test, 5530 SAP SD Benchmark users were supported while maintaining a consistent application response time of less than one second.
Many business organizations currently struggle with the cost of maintaining RISC processor–based servers running proprietary operating systems and third-party database management systems. Cisco UCS enables organizations to use lower-cost, industry-standard x86-architecture servers, open source operating systems, and database management systems, and allows them to run SAP Business Suite applications in virtualized environments. With Cisco UCS, organizations can easily balance workloads across a pool of servers to manage service levels according to business priorities, scale environments up and down as needed, and contain costs by consolidating workloads onto a smaller number of servers.
Using Cisco UCS, IT departments can run virtualized SAP Business Suite applications with the flexibility, scalability, and lower cost of virtualized environments. These innovations, which deliver high scalability and low latency in virtualized SAP Business Suite deployments, together with the dramatic reduction in the number of physical components required, illustrate the value the Cisco UCS solution creates for customers planning a migration away from proprietary RISC/UNIX-based systems to open source operating system software and standards-based computing infrastructure.
The balance of power is shifting to emerging economies. Compared to stagnant Western markets, business growth and investment in the Middle East, Africa and Russia (MEAR) continue unabated.
It’s a shift that’s amplified by technological advances. In under a decade, these seismic changes have levelled the playing field, opened the door to a global market and made rapid business growth a reality:
The connected world lets us work, play and learn anytime, anywhere and with anyone.
Virtualization has made it easier to manage multiple servers and reduce the amount of physical hardware required.
Computing power has exponentially increased capacity and processing speeds, so we can do much more in less time and for less money.
The cloud offers all the applications and storage businesses need minus the server infrastructure.
You’ll probably point out many other factors, but I picked these because they are particularly relevant to MEAR countries and their IT spending patterns. Specifically, they are backed up by 2012 Forrester research, which showed that over half of MEAR-based companies plan to invest more in mobility, analytics, security and collaboration.
Unlike companies in more mature markets, their spending isn’t being eroded by having to maintain and support legacy systems. This frees up budgets to completely replace or expand their IT in ways that improve their competitiveness. The top three areas that Forrester highlighted from 2011 to 2012 were mobile apps (spending increase of 47%), business intelligence (44%) and collaboration tools (41%).
Further research was carried out by Canalys in February 2012 among its online channel community of resellers, systems integrators, service providers and distributors. The results showed a positive outlook across MEAR despite ongoing economic uncertainty. Over half emphasized a move from capital expenditure to operating expenditure, with the highest demand for IT services expected from small to midsize companies (with 100-499 employees). As one respondent put it, “Companies working their way out of the crisis by expanding.”
As more companies seek new technologies to secure future growth, our partner network across MEAR needs to be ready to help them become the technology leaders of tomorrow.
*Forrester, 2012, Forrsights: Cautious Optimism in 2012 IT Spending Plans -- A BT Futures Report
*Canalys, 2012, Navigating through dramatic industry change
Kiss your old running shoes good-bye. Change is constant. And technology has always been about change and convergence. But the massive, global-scale change occurring now is happening at rates faster than anyone ever predicted.
And this is disruptive change. It’s change that requires you to act, adapt, and move quickly to take advantage of the opportunities that come with it.
Cisco has a long history of showcasing disruption and convergence at Enterprise Connect, going back to the early days of VoiceCon. TDM to voice over IP; the convergence of voice, video, and data; unified communications: In each case we saw how converging technology and collaborative behavior have helped disrupt the traditional way of doing things and have created more value for businesses and users.
Today technology is creating disruption in unexpected places.
In this week’s episode of Engineers Unplugged, WWT’s Dave Kinsman (@virtualizethis) and Chris Gebhardt (@chrisgeb) take on the current buzz in the end-user computing space. Listen in on all things VDI, from storage to flash:
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
This was the test I most eagerly anticipated because of the lack of information on the web regarding running a Xeon-based system at a reduced memory speed. Here I am at Cisco, the company that produces one of the only blades in the industry capable of supporting both the top bin E5-2690 processor and 24 DIMMs (HP and Dell can’t say the same), yet I didn’t know the performance impact for using all 24 DIMM slots. Sure, technically I could tell you that the E5-26xx memory bus runs at 1600MHz at two DIMMs per channel (16 DIMMs) and a slower speed at three DIMMs per channel (24 DIMMs), but how does a change in MHz on a memory bus affect the entire system? Keep reading to find out.
Speaking of memory, don’t forget that this blog is just one in a series of blogs covering VDI:
Join us for a free webinar on March 27 discussing this blog series. Register here.
The situation. As you can see in the 2-socket block diagram below, the E5-2600 family of processors has four memory channels and supports three DIMMs per channel. For a 2-socket blade, that’s 24 DIMMs. That’s a lot of DIMMs. If you populate either 8 or 16 DIMMs (1 or 2 DIMMs per channel), the memory bus runs at the full 1600MHz (when using the appropriately rated DIMMs). But when you add a third DIMM to each channel (for 24 DIMMs), the bus slows down. When we performed this testing, going from 16 to 24 DIMMs slowed the entire memory bus to 1066MHz, so that’s what you’ll see in the results. Cisco has since qualified running the memory bus at 1333MHz in UCSM maintenance releases 2.0(5a) and 2.1(1b), so running updated UCSM firmware should yield even better results than we saw in our testing.
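The population-to-speed tradeoff described above can be sketched as a small lookup. This is a simplified model based only on the figures in this post (4 channels per socket, 2 sockets, 1600MHz-rated DIMMs, and the 1066MHz vs. 1333MHz behavior at 3 DIMMs per channel depending on firmware); actual speeds depend on the DIMMs and platform in use:

```python
# Memory-speed model for a 2-socket Intel Xeon E5-2600 blade as described
# in this post: 4 memory channels per socket, up to 3 DIMMs per channel.
CHANNELS_PER_SOCKET = 4
SOCKETS = 2

def bus_speed_mhz(total_dimms, updated_firmware=False):
    """Return the memory bus speed for an evenly populated 2-socket blade.

    Assumes 1600MHz-rated DIMMs. At 3 DIMMs per channel the bus drops to
    1066MHz as originally tested, or 1333MHz with the later UCSM firmware
    (2.0(5a) / 2.1(1b)) mentioned in the post.
    """
    dimms_per_channel = total_dimms / (CHANNELS_PER_SOCKET * SOCKETS)
    if dimms_per_channel <= 2:
        return 1600
    return 1333 if updated_firmware else 1066

print(bus_speed_mhz(16))                         # 1600
print(bus_speed_mhz(24))                         # 1066, as in this test
print(bus_speed_mhz(24, updated_firmware=True))  # 1333
```

In other words, the 16-to-24 DIMM step trades memory bandwidth for capacity, and newer firmware narrows (but does not eliminate) that penalty.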
As we’ve done in all of our tests, we looked at two different blades with two very different processors. Let’s start with the results for the E5-2665 processor. The following graph summarizes the results from four different test runs. Let’s focus on the blue lines. We tested 1vCPU virtual desktops with the memory bus running at 1600MHz (the solid blue line) and 1066MHz (the dotted blue line). The test at 1600MHz achieved greater density, but only 4% greater, which is effectively negligible considering that LoginVSI is designed to randomize the load in these tests.