Many organizations are using server virtualization to consolidate application workloads in their datacenters. By using a highly efficient platform like the Cisco Unified Computing System (UCS), organizations find that they can improve asset utilization and dramatically lower IT costs. This enables the datacenter team to be more responsive to initiatives that produce real value for the business.
The server platform and virtualization address one part of the application delivery challenge for the global enterprise. UCS can easily handle the compute requirements of complex applications, but what about the increased demand placed on the WAN as applications are delivered to a distributed workforce? How do you ensure an acceptable user experience? Accessing information over a WAN is much slower than accessing information over a LAN, due to limited WAN bandwidth, packet loss, and latency. To meet this challenge, a solution needs to both scale the server platform and increase WAN performance.
Applications not only need to run fast in the data center; they must also run successfully for end users wherever they may be. Organizations are finding that they must consider application acceleration as part of their application solution architecture, so that they not only scale application performance on the server, but also deliver applications to remote sites with high performance, serving users at every location.
“The IT department had already begun adopting server virtualization, using VMware ESX software on rack-optimized servers. But the new clinical data warehousing applications would require 16 additional VMware ESX servers, and the data center lacked sufficient I/O infrastructure and cabling. In addition, provisioning this quantity of physical servers would be difficult in the few weeks available.” Sounds familiar? As I talked with customers at EMC World in Boston and at SAPPHIRE in Orlando over the past two weeks, listening to customers from Chicago, New York, Frankfurt, London, and Paris, and visiting UCS customers such as Pacific Coast Building Products Inc., this concern came up everywhere. The challenge facing Moses Cone Health System, a large healthcare provider based in North Carolina, is not unique, but it is nevertheless critical to the success of the IT organization's mission.
“We needed a cost-effective computing system that would enable us to expand our use of Electronic Medical Records quickly over the next year, minimize network infrastructure build-out, and reduce time to rack and configure servers,” explained Michael Heil, manager of technology infrastructure at Moses Cone Health System.
Among the numerous benefits provided by the acquisition of the Cisco UCS, Michael Heil was keen to highlight:
So, this should be a good Friday post--it's short and there are cash prizes involved.
Well, actually, it's the first part of a post. I have a question for you readers. I have my own thoughts on the answer, but I wanted to see what other folks thought first, before I taint the discussion. I have been spending a fair number of cycles on a working group pulling together our company's perspectives on cloud computing and where we fit in. While I was reading something on private clouds, something seemed oddly familiar, which leads me to the question:
Were mainframes the first private clouds?
Bear with me--there are a lot of similarities: both are based on virtualization, pooled resources, and reallocation of resources via policy. Also, as someone pointed out to me the other day, some of the potential downsides are shared too, like vendor lock-in, lack of portability, etc.
So what do you think? I have an Amazon or iTunes gift card for the best (or most entertaining) argument for and against. While you are making your argument, tell me what you think we can learn from the mainframe days that can help us today as we look at cloud models. For example, the mainframe folks certainly had the billing and accounting thing nailed.
So, what do you think?
PS: For my next post, I am going to explore whether the DEC VT100 was the first instance of VDI.
This morning SAP CTO Vishal Sikka, VMware CEO Paul Maritz, and Cisco CTO Padmasree Warrior announced to the SAPPHIRENOW crowd in Orlando and Frankfurt an unprecedented partnership between the Virtual Computing Environment (VCE) coalition and SAP, designed to accelerate customers' virtualization and potential adoption of the cloud computing model. This major announcement reinforced the message delivered yesterday by SAP co-CEO Jim Hagemann Snabe at SAPPHIRENOW: “SAP is definitely committed to a software and hardware stack approach” and has “decided to work with leaders such as Cisco, EMC and VMware … companies that are the best at what they do.”
This announcement was also immediately detailed by a panel of senior executives who discussed, based on a real customer case study, how VCE Vblock Infrastructure Packages help SAP virtualization deliver huge benefits in terms of performance, speed of deployment, and TCO. Members of the panel were:
- Tom Peck, CIO, Levi Strauss & Co.
- Pat Gelsinger, President and COO, EMC Information Infrastructure Products
- Andre Hughes, Global Managing Director, Accenture Cisco Business Group
- Kevin Ichhpurani, SVP Global Software and Technologies, SAP
At SAPPHIRENOW in Orlando, I met Manjula Talreja, Cisco VP of the Virtual Computing Environment coalition, to better understand what this partnership between VCE and SAP means for our customers.
The addition of SAP to the partner ecosystem is further proof of customers' strong interest in the unified computing approach, and more specifically in the Vblock concept. At SAPPHIRENOW I also met the team in charge of the SAP NetWeaver Adaptive Computing Controller, one of the major forces inside SAP helping customers move to virtualization. There is no doubt in my mind that this partnership represents a great opportunity for SAP customers, easing the virtualization journey envisioned by SAP's technology leaders and already embraced by early-adopter Cisco SAP customers such as Pacific Coast Building Products.
On the heels of numerous VCE-related announcements, discussions, and demos at EMC World 2010, and the beginning of many announcements coming from SAP SAPPHIRE this week, I thought it might be useful to put a few concepts in perspective: specifically, Cisco UCS, the VCE Coalition, Vblocks, Acadia, and the journey to the cloud. When new products and concepts are introduced to the market, there is often a period of time before they are fully understood.
Cisco Unified Computing System (UCS)
I’m not going to elaborate on Cisco UCS in this section. Our partners and customers do a very good job of driving conversations about the solution on a daily basis, in both public forums and Cisco communities. In the context of VCE and Vblock, I do want to clarify one point about UCS. While UCS is part of the Vblock Bill of Materials (BOM), it is in no way restricted to being sold only in a VCE or Vblock configuration. While UCS has design characteristics and integration points that optimize it for virtualized data center environments, Cisco realizes that customers have a variety of computing needs (physical and virtual) and vendor preferences (applications, hypervisor, storage, management, etc.) and is fully committed to delivering the value of UCS to those environments.