
Vegas!!! It’s All About Management… MMS 2013 that is…

February 19, 2013 at 3:44 pm PST

We've mentioned earlier that Cisco is again a platinum sponsor at this year's Microsoft Management Summit in Las Vegas. We'll showcase UCS, UCS Manager, PowerTool for PowerShell, and many other management solutions and technologies. Two speaking sessions are dedicated to Cisco technology as well; see details below. So don't gamble away your chance to learn from Cisco Product Managers and Engineers about our integration with the Hyper-V Extensible Switch and System Center! Details on our presence can be found at www.cisco.com/go/mms.

Virtual Networking Solutions for Microsoft Hyper-V environments
Session ID: WS-B201
Date: TBD
Time: TBD

Server virtualization and cloud environments provide many benefits to enterprise customers. However, the dynamic nature of these environments and IT operational practices presents additional complexities for virtual machine networking. This session covers how Cisco virtual networking solutions (Cisco Nexus 1000V and Cisco UCS VM-FEX) can help simplify virtual networking. It shows how they help to enable consistency across physical and virtual networks and provide an advanced networking feature set. The session also details Cisco integration with System Center Virtual Machine Manager to provide a non-disruptive operational model to the server team. Attend this session to learn about Cisco Nexus 1000V and Cisco UCS VM-FEX architecture. Also discover the virtual networking services they bring to Hyper-V environments, and how they can simplify network operations.

Automate Your Infrastructure: Programmatic Management of Cisco UCS with Microsoft System Center and PowerShell
Session ID: WS-B337
Date: Tuesday, April 9
Time: 10:15–11:30 a.m.

Cisco Unified Computing System (Cisco UCS) Manager improves process automation and policy management, helping data center managers achieve greater agility and scale in their server operations while reducing complexity and risk. Join our Cisco systems management experts as they cover managing Cisco UCS with Microsoft System Center and PowerShell. The session will detail the integration of the Cisco UCS XML API with Microsoft management tools, which provides full views of Cisco UCS hardware inventory, monitoring, alerting, and automation using both Microsoft System Center and PowerShell. You will also see demonstrations of real-world examples showing programmatic bare-metal deployment and ongoing systems monitoring and management of Cisco UCS using Cisco UCS PowerTool and Microsoft System Center.
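If you can't wait for the session, here is a minimal sketch of the kind of inventory and fault reporting PowerTool makes possible. Treat it as an illustration under assumptions: the module name, the management address, and the exact property list come from my own recollection of PowerTool, not from the session content, so verify them against your installed version.

    # Connect to UCS Manager and pull inventory plus critical faults (a sketch).
    Import-Module CiscoUcsPS                     # PowerTool module name circa 2013; newer releases differ
    $cred   = Get-Credential
    $handle = Connect-Ucs -Name ucs-mgr.example.com -Credential $cred   # hypothetical address

    # Hardware inventory: every blade visible to the fabric interconnects
    Get-UcsBlade |
        Select-Object Dn, Model, Serial, TotalMemory, OperState |
        Format-Table -AutoSize

    # Monitoring and alerting: surface only the critical faults
    Get-UcsFault |
        Where-Object { $_.Severity -eq 'critical' } |
        Select-Object Created, Dn, Descr

    Disconnect-Ucs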

Learn more about Cisco's technology and solutions for your Microsoft-oriented data center at www.cisco.com/go/microsoft.


Cisco Domain Ten: Domain 6: Service Financial Management (with yet another free whitepaper!)

February 15, 2013 at 9:53 am PST

Service Financial Management is the focus of Domain 6 in Cisco Services' Domain Ten℠ model for Data Center and Cloud Transformation. Closely related to the User Portal (Domain 4) and Service Catalog and Management (Domain 5), service financial management is one of those organizationally challenging topics for the data center management team. With the advent of cloud services, though, it is becoming more widely appreciated, and in many cases (e.g. a service provider offering cloud services to businesses, or a public sector organization offering services to other regional public service organizations) it is a mandatory part of your offer. So let's discuss this area, and I'll point you to a technical white paper from Cisco Services experts on the topic.

Cisco Domain Ten - Domain 6 - Service Financial Management

Read More »


Ain't your father's TCP

February 15, 2013 at 5:00 am PST

TCP?  Who cares about TCP in HPC?

More and more people, actually.  With the commoditization of HPC, lots of HPC newcomers are intimidated by specialized, one-off, traditional HPC networks and opt for the simplicity and universality of Ethernet.

And it turns out that TCP doesn't suck nearly as much as most (HPC) people think, particularly on modern servers, Ethernet fabrics, and powerful Ethernet NICs.

Read More »


Fabric-Based Infrastructure and Cisco UCS Servers

February 15, 2013 at 4:30 am PST

Fabric-Based Infrastructure and Cisco UCS

A good segue to Fabric-Based Infrastructure is Gartner's Magic Quadrant for Blade Servers (March 2012), by Andrew Butler and George Weiss.  To fully understand the tie-in with Fabric-Based Infrastructure, I suggest reading the section on Cisco UCS.  Their observations are important because they tie directly to the subject of this post.  You will also get a better feel for why Cisco UCS is seeing such rapid customer adoption worldwide.

The emphasis for Fabric-Based Infrastructure is delivering value-add functionality that enables data centers to operate more efficiently and cost-effectively.  A good place to start is this Gartner report by George Weiss and Donna Scott - Fabric-Based Infrastructure Enablers and Inhibitors Through the Lens of User Experiences (April 2012).  In this short research note, George and Donna go into the key drivers and reasons for the FBI architecture and the benefits their clients have seen.  My takeaways on the key benefits of Fabric-Based Infrastructure are:

  1. OpEx and CapEx savings
  2. Increased VM density
  3. Time to deploy reduced from months to hours via automation and standards implementation
  4. Reduced cost and complexity, and improved agility
  5. Improved resiliency, with servers and connectivity recreated in minutes using profiles and templates

While reading about a technology innovation is helpful, actually listening to experts discuss the architecture and give their individual perspectives can be more so.

I suggest that you make time to listen to this 34-minute video with featured guests Donna Scott (a VP and Distinguished Analyst at Gartner) and Paul Perez (VP and CTO for the Data Center Business Group at Cisco Systems) - Fabric-Based Infrastructure (FBI) in Today's Data Center.  Donna looks at the motivations and impact of customers moving to a Fabric-Based Infrastructure, with an eye toward what is important to adopters.  Then Paul discusses Cisco UCS innovations and how they let FBI adopters achieve their goals.  If you would like, you can download a podcast of the video from the Cisco Analyst Reports page.

From my perspective, the truly compelling part of this story is the extent to which Cisco UCS makes the promise of Fabric-Based Infrastructure a reality, while emphasizing safety, security, and risk reduction.  These are critical considerations in today's IT environment.  Cisco continues to be a key innovator in data center technology and continues to go from strength to strength, delivering value and benefit for your long-term application solution needs.

Below is how I think a Fabric-Based Infrastructure should look.  Of course, I am predisposed.  The Cisco UCS architecture provides the ability to define and manage over 120 different server identity parameters via service profile templates, using a native tool with role-based access control, across geographies.  UCS enables you to have a distributed environment that is centrally managed.  Your admins can also use the CLI, custom-designed tools and scripts, or third-party tools as they choose, to meet the needs of their current management structure.
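As a small taste of what those custom scripts can look like, here is a read-only PowerTool sketch that lists service profile templates and shows which blade each instantiated profile is bound to. It assumes an existing Connect-Ucs session, and the property names (Type, SrcTemplName, AssocState, PnDn) follow my recollection of the UCS XML API's lsServer class, so confirm them with Get-Member on your system.

    # Service profile templates defined anywhere in the org hierarchy
    Get-UcsServiceProfile |
        Where-Object { $_.Type -like '*template*' } |
        Select-Object Dn, Name, Type

    # Instances: the source template and the physical blade (PnDn) each profile is associated with
    Get-UcsServiceProfile |
        Where-Object { $_.Type -eq 'instance' } |
        Select-Object Name, SrcTemplName, AssocState, PnDn |
        Format-Table -AutoSize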

Read More »


VDI “The Missing Questions” #3: Realistic Virtual Desktop Limits

So this is the Million Dollar Question, right? You, along with the executives sponsoring your particular VDI project, want to know: how many desktops can I run on that blade? It's funny how such an "it depends" question becomes a benchmark for various vendors' blades, including this vendor here.

Well, for the purposes of this discussion series, the goal is not to reach some maximum number by spending hours in the lab tweaking the various knobs and dials of the underlying infrastructure. The goal of the overall series is to see what happens to the number of sessions as we change various aspects of the compute: CPU speed and cores, memory speed and capacity. Our series posts are as follows:

 

You are Invited!  If you’ve been enjoying our blog series, please join us for a free webinar discussing the VDI Missing Questions, with Doron, Shawn and myself (Jason)!  Access the webinar here!

But for the purpose of this question, let's look simply at the scaling numbers with the appropriate amount of RAM for the VDI count we will achieve (i.e., no memory overcommit) and the maximum allowed memory speed (1600 MHz).
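To make "the appropriate amount of RAM with no memory overcommit" concrete, here is a back-of-the-envelope sizing sketch. Only the 130-session figure comes from the results below; the per-VM memory, per-VM overhead, and host reservation values are hypothetical placeholders, not our actual test configuration.

    # Rough host RAM needed when nothing is overcommitted (all inputs except $sessions are assumptions)
    $sessions     = 130     # peak 1 vCPU count measured on the E5-2665 (see results below)
    $vmMemoryGB   = 1.5     # assumed Windows 7 desktop allocation
    $vmOverheadGB = 0.1     # assumed per-VM hypervisor overhead
    $hypervisorGB = 4       # assumed reservation for the host itself

    $requiredGB = $sessions * ($vmMemoryGB + $vmOverheadGB) + $hypervisorGB
    "Host RAM needed with no overcommit: {0:N0} GB" -f $requiredGB     # ~212 GB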

As Doron already revealed in question 1, we did find some maximum numbers in our test environment. Other than the customized Cisco ESX build on the hosts and tuning our Windows 7 template per VMware's View Optimization Guide for Windows 7, the VMware View 5.1.1 environment was a fairly default build-out designed for simplicity of testing, not massive scale. We kept not-yet-logged-in VMs in reserve, like you would in the real world, so that users can log in quickly… yes, that may affect the theoretical maximum number you could get out of the system, but again… not the goal.

And the overall test results look a little something like this:

                        E5-2643 Virtual Desktops    E5-2665 Virtual Desktops
    1 vCPU, 1600 MHz    81                          130
    2 vCPU, 1600 MHz    54                          93

As explained in Question 1, cores really do matter… but even then, surprisingly, the two CPUs are neck and neck in the race until around the 40 VM mark. Then the 2 vCPU desktops on the quad-core CPU really take a turn for the worse:


Why?

Co-scheduling!

When a VM has two (or more) vCPUs, the hypervisor must find two (or more) physical cores to plant the VM on for execution within a fairly strict timeframe to keep that VM's multiple vCPUs in sync.

MULTIPLE vCPU VMS ARE NOT FREE!

Multiple vCPUs create a constraint that takes time for the hypervisor to sort out every time it makes a scheduling decision, not to mention you simply have more vCPUs for the hypervisor to schedule for the same number of sessions: DOUBLE that of the one-vCPU VMs. The only way to ease this constraint is with more cores.
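A rough way to put numbers on that constraint (a sketch: the dual-socket blade assumption and the 40-session checkpoint are mine; the four- and eight-core counts per socket are the published specs for these CPUs):

    # vCPUs contending per physical core at the same session count
    $cpuHosts = @(
        @{ Cpu = 'E5-2643'; Cores = 2 * 4 },    # assumed dual-socket blade, 4 cores per socket
        @{ Cpu = 'E5-2665'; Cores = 2 * 8 }     # assumed dual-socket blade, 8 cores per socket
    )

    foreach ($h in $cpuHosts) {
        foreach ($vcpu in 1, 2) {
            $sessions = 40                                  # roughly where the scaling curves diverge
            $ratio    = ($sessions * $vcpu) / $h.Cores      # vCPUs per physical core
            "{0}: {1} vCPU x {2} sessions -> {3:N1} vCPUs per core" -f $h.Cpu, $vcpu, $sessions, $ratio
        }
    }
    # At 40 sessions the 2 vCPU pool on the E5-2643 is already at 10 vCPUs per physical core,
    # and every dispatch must find two free cores at the same time (co-scheduling).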

That said, the 2 vCPU VMs continue to scale consistently on the E5-2665, with double the core count of the E5-2643. At around the 85 session mark, though, even the E5-2665 can no longer provide a consistent experience with 2 vCPU VDI sessions running. I'll stop here and jump off that soapbox… we'll dig more into the multiple-vCPU virtual desktop configuration in a later question (hint hint hint)…

Now let's take a look at the more traditional VDI desktop: the 1 vCPU VM:


With the quad-core E5-2643, performance holds strong until around the 60 session mark; then latency quickly builds as the 4000ms threshold is hit at 81 sessions. But look at what a trooper the E5-2665 is! Follow its 1 vCPU scaling line in the chart: all those cores show a very consistent latency line up to around the 100 session mark, after which it becomes somewhat less consistent up to the 4000ms VSImax of 130. That's 130 responsive systems on a single server! I remember when it was awesome to get 15 or so systems going on a dual-socket box 10 or so years ago, and we are at 10x the quantity today!

Let's say you want to impose harsher limits on your environment. You've got a pool of users who are a bit more sensitive to response time than others (like your executive sponsors!). A 4000ms response time may be too much, and you want to halve that to 2000ms. According to our test scenario, the E5-2665 can STILL sustain around 100 sessions before the scaling becomes a bit more erratic in this workload simulation.


Logic would suggest that half the response time means half the sessions, but that simply isn't the case, as shown here. We reach a Point of Chaos (POC!) where response times and behaviors become very inconsistent as we continue to add sessions. In other words: it does not take many more desktop sessions in a well-running environment that is close to the "compute cliff" before latency doubles and your end users are unhappy. On the plus side, assuming storage I/O latency isn't an issue, our testing shows that you do not need to drop that many sessions from each server in your cluster to rapidly recover session response time.

So in conclusion, the E5-2643, with its high clock speed and lower core count, is best suited for smaller deployments of fewer than 80 desktops per blade. The E5-2665, with its moderate clock speed and higher core count, is best suited for larger deployments of more than 100 desktops per blade.
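If you want to turn that into a starting rule of thumb, here is a hedged picker built only from the measured 1 vCPU VSImax numbers above; the 20 percent headroom factor is my assumption, not part of the test results.

    # Pick a CPU for a target desktop count per blade, keeping a safety margin below the measured cliff
    function Select-VdiBladeCpu {
        param(
            [int]    $DesktopsPerBlade,
            [double] $Headroom = 0.20           # assumed margin below the measured VSImax
        )

        $measuredMax = @{ 'E5-2643' = 81; 'E5-2665' = 130 }    # 1 vCPU, 1600 MHz results above

        foreach ($cpu in 'E5-2643', 'E5-2665') {
            if ($DesktopsPerBlade -le [math]::Floor($measuredMax[$cpu] * (1 - $Headroom))) {
                return $cpu
            }
        }
        return 'Scale out: add blades rather than pushing past the measured cliff'
    }

    Select-VdiBladeCpu -DesktopsPerBlade 60     # -> E5-2643
    Select-VdiBladeCpu -DesktopsPerBlade 100    # -> E5-2665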

 

Next up… how much normalized CPU SPEC performance does a virtual desktop need, at a minimum?

 
