I recently worked with Loughborough University on a financial impact study of their initial Cisco UCS deployment. The study documents a dramatic improvement in IT efficiency, bearing out the advantages that attracted them to the UCS solution. Loughborough’s Customer Case Study has been revised with the results of this TCO study, as well as new details on the next stage of their deployment: the Cisco Virtualization Experience Infrastructure (VXI) Smart Solution.
We examined Loughborough’s projected growth rates and compared the continuation of their previous rack server environment against a UCS solution combined with an expansion of their virtualized environment. Server consolidation and reduced administrator workload contributed to exceptional results: a total savings of US$878,789 (40% OpEx and 60% CapEx) with a 225% ROI and 22% IRR. Compared to the previous environment, Loughborough’s UCS deployment will drive down costs in several key areas over the coming five years:
server hardware – 38%
switching infrastructure and cabling – 80%
power and cooling – 49%
new server provisioning – 79%
virtualization software – 39%
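As a rough sketch of how the headline ROI figure relates to the savings: assuming the published US$878,789 savings is the net benefit, the implied initial investment can be backed out of the 225% ROI. The helper function and the derived investment figure below are illustrative, not numbers from the study itself.

```python
def roi(net_benefit, investment):
    """Simple ROI: net benefit divided by investment, as a percentage."""
    return 100.0 * net_benefit / investment

# Published figures from the study
total_savings = 878_789   # USD over five years
published_roi = 225       # percent

# Implied initial investment, assuming ROI = savings / investment
implied_investment = total_savings / (published_roi / 100.0)
print(f"Implied investment: ${implied_investment:,.0f}")
print(f"ROI check: {roi(total_savings, implied_investment):.0f}%")
```

This is a simple-ROI back-of-the-envelope only; the study's 22% IRR additionally accounts for the timing of cash flows over the five-year lifecycle.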
“When we compared the legacy server and network with one based on Cisco UCS, TCO effectively halves over a five-year investment lifecycle.”
Dr. Phil Richards, Director of IT, Loughborough University.
Thanks to Cisco’s Unified Fabric approach, the study shows that Loughborough will need only six switches (three redundant pairs) to support their end state, versus 30 in the legacy environment, with a corresponding reduction in cables from 646 to just 44.
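Those infrastructure numbers can be sanity-checked with quick arithmetic (the helper below is purely illustrative):

```python
def reduction(before, after):
    """Percent reduction going from 'before' to 'after'."""
    return 100.0 * (before - after) / before

print(f"Switches: 30 -> 6,  {reduction(30, 6):.0f}% fewer")    # 80% fewer
print(f"Cables:   646 -> 44, {reduction(646, 44):.0f}% fewer")  # 93% fewer
```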
Would you like to learn more about how Cisco UCS can help you? There are more than 250 published data center case studies on Cisco.com. Additionally, there is a TCO/ROI tool that lets you compare your existing environment against a new UCS solution. For a more in-depth TCO/ROI analysis, contact your Cisco partner.
There is a lot of buzz in the market about Cisco Cloupia and how it is positioned relative to other Cisco solutions such as Cisco Intelligent Automation for Cloud. As I mentioned in my previous blog, the term “cloud” is often used interchangeably for automated infrastructure provisioning and for true clouds. To better understand where these solutions fit in your data center’s cloud journey, I offer the following explanation.
Historically, to keep pace with the growth of business applications and the data they generate, IT infrastructure resources were deployed in silos: one set of resources devoted to a particular computing technology, business application, or line of business. These resources were not always optimized and could not be reconfigured or shared to support varying workloads.
March Madness is here and in full effect. If you’re reading this post you probably aren’t paying close enough attention to the results pouring in from the round of 64. Today and tomorrow will make or break your bracket! Take appropriate action. As soon as I hit “publish” on this post I promise you that I will.
These ads come on the heels of a big push we’re making at Cisco to spread the good word about Unified Computing. We have print and digital ads running across the big tech pubs that talk about the very real application performance and IT operations benefits that UCS brings.
Can you see it? The end is nigh! The end of this blog series, that is — not necessarily “the end” as in AMC’s The Walking Dead sort of end. Are you zombie-stumbling across this blog from a random Google search? Here is a table of contents to help you on your journey as we once again delve into the depths and address another question on our quest to answer… the VDI questions you didn’t ask, but really should have.
Got RAM? VDI is an interesting beast, both from a physical perspective and in its care and feeding. One thing this beast certainly does like is RAM (and braaaiiiins). Just in case I am still being stalked by that tech writer, RAM stands for Random Access Memory. I spoke a bit about operating systems in the 5th question in this series, and this builds on that with regard to the amount of memory you should use. Microsoft says Windows 7 needs 1 gigabyte (GB) of RAM (32-bit) or 2 GB of RAM (64-bit). For the purpose of our testing, we went smack in the middle with 1.5 GB of RAM. Does it really matter what we used for this testing? It does a little: first, we need sufficient resources for the desktop to perform the functions of the workload test, and second, we need to pre-establish some boundaries to measure from.
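To put that 1.5 GB figure in context, a back-of-the-envelope host sizing calculation looks something like the sketch below. The desktop count, per-VM overhead, and hypervisor reservation values are hypothetical assumptions for illustration only, not numbers from our testing:

```python
def host_ram_needed(desktops, vm_ram_gb, overhead_gb_per_vm, hypervisor_gb):
    """Total host RAM: per-desktop RAM plus per-VM hypervisor overhead,
    plus a fixed reservation for the hypervisor itself."""
    return desktops * (vm_ram_gb + overhead_gb_per_vm) + hypervisor_gb

# Assumptions for illustration only
desktops = 70       # desktops per host (hypothetical)
vm_ram = 1.5        # GB per desktop, as in our testing
overhead = 0.1      # GB of per-VM hypervisor overhead (hypothetical)
hypervisor = 2.0    # GB reserved for the hypervisor itself (hypothetical)

print(f"{host_ram_needed(desktops, vm_ram, overhead, hypervisor):.0f} GB per host")
```

The real per-VM overhead depends on your hypervisor and VM configuration, which is exactly why the next step is calculating it properly.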
Calculating overhead. To properly account for memory usage, we need to take into account the hypervisor’s own overhead. If you want to learn more about calculating overhead, click here. Here are a couple of things we are figuring into overhead:
On Engineers Unplugged this week, we are trying something new: a double edition! First up, in Episode 5, VCE’s Jay Cuthrell (@qthrul) and Nick Weaver (@lynxbat) talk shop about automation and the evolution of open source, including GitHub, and the role of community in solving problems in tech. It’s an amazing discussion with practical guidance on how you can get more involved:
Jay Cuthrell and Nick Weaver take the Community Unicorn Challenge!