It’s a new year and we’re looking forward to an exciting 2015. Let’s pause to take one last look at 2014. For the teams from Cisco and Microsoft supporting our data center alliance, it was a very good year. We expanded our partnership, developed integrated solutions, and aligned our field and partner teams to deliver even more customer value. We used our social and digital platforms to keep you informed and stay engaged. Here are some of the highlights.
An Expanded Partnership
After years of close collaboration, Cisco and Microsoft took our relationship to even greater heights with a new three-year agreement. Jim McHugh, Cisco VP of UCS Marketing, blogged on how integrated solutions based on Cisco UCS and Nexus with Microsoft Hyper-V and Windows Server 2012 R2 enable IT organizations to dramatically improve effectiveness and accelerate customers’ journeys to the cloud. Cisco VP of ISVs and Global Technology Partners Denny Trevett’s blog focused on how Cisco and Microsoft aligned our channel programs to encourage solution delivery in a partner-to-partner model.
In June, Cisco UCS achieved the top rank in the Americas’ x86 Blade Server Market measured by revenue market share, according to IDC. Microsoft CVP, Brad Anderson, took the opportunity to share his thoughts on the joint innovation from Cisco and Microsoft that is helping to drive this market momentum. Listen in.
The Cisco UCS® C460 M4 Rack Server continues its tradition of industry leadership with the announcement of a new best non-clustered TPC-H benchmark result at the 1000-GB scale factor, running Microsoft SQL Server 2014 Enterprise Edition.
The Cisco UCS® C460 M4 Rack Server captured the number-one spot on the TPC-H benchmark at the 1000-GB scale factor with a price/performance ratio of $0.97 USD per QphH@1000GB, demonstrating 588,831 queries per hour (QphH@1000GB) and beating results from Dell, Fujitsu, and IBM.
The TPC-H benchmark evaluates a composite performance metric (QphH@size) and a price-to-performance metric ($/QphH@size) that measure the performance of various decision-support systems by running sets of queries against a standard database under controlled conditions. For the benchmark, the server was equipped with 1.5 TB of memory and four 2.8-GHz Intel Xeon processor E7-4890 v2 CPUs. The system ran Microsoft SQL Server 2014 Enterprise Edition and Windows. Check out the Performance Brief for additional information on the benchmark configuration. The detailed official benchmark disclosure report is available at the TPC Results Highlights Website.
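Because $/QphH@size is defined as total system price divided by the composite metric, the two published figures imply a total system price. The quick check below is only that arithmetic relationship; the exact priced-configuration figure comes from the official disclosure report, and $0.97 is a rounded ratio:

```python
# Published TPC-H figures for the Cisco UCS C460 M4 result (1000-GB scale factor).
qphh = 588_831            # composite performance, QphH@1000GB
price_per_qphh = 0.97     # price/performance, USD per QphH@1000GB

# $/QphH@size = total system price / QphH@size, so the implied total
# price is simply the product of the two published numbers.
implied_total_price = price_per_qphh * qphh
print(f"Implied total system price: ${implied_total_price:,.0f}")
```

This lands in the neighborhood of $571K, which is the kind of figure to compare against the executive summary of the full disclosure report.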
Some of the key highlights of Cisco’s TPC-H Benchmark results are:
High Performance for Microsoft SQL Server 2014: Cisco’s is the fastest result at the 1000-GB scale factor among servers running Microsoft SQL Server.
As illustrated in the graph below, the Cisco performance result beats the top Fujitsu, Dell, and IBM results for the 1000-GB scale factor by 80, 31, and 13 percent, respectively. Cisco’s price/performance ratio is 29 percent lower than the IBM result’s.
It is interesting to note that although all vendors have access to the same Intel processors, only Cisco UCS unleashes their full potential for applications through unification. The unique, fabric-centric architecture of Cisco UCS integrates the Intel Xeon processors into a system with a better balance of resources that brings processor power to life. For additional information on Cisco UCS and Cisco UCS Integrated Infrastructure solutions, please visit the Cisco Unified Computing & Servers web page.
The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks, and to disseminate objective and verifiable performance data to the industry. TPC membership includes major hardware and software companies. TPC-H, QphH, and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). The performance results described in this document are derived from detailed benchmark results available as of December 15, 2014, at http://www.tpc.org/tpch/default.asp.
A Guest Blog by Partner Rick Heiges of Scalability Experts: Rick is a SQL Server Microsoft MVP and Senior Solutions Architect. He primarily works with Enterprise customers on their Data Platform strategies. Rick is also very involved in the SQL Server Community primarily through PASS and events such as the PASS Summit, SQL Saturdays, and 24 Hours of PASS. His tenure on the PASS Board of Directors saw the annual Summit triple in size from 2003 to 2011. You can find his blog at www.sqlblog.com.
So far, it has been another great week here at the PASS Summit 2014, SQL Server’s largest annual user and partner conference. With yesterday’s keynote address, there is still very much a focus on getting to the cloud and on new investments in cloud technology in general. Microsoft seems to be extending its data collection and storage technologies both in the cloud and on-prem. One of the coolest features discussed was the concept of “stretch tables,” where a table that lives on your on-prem SQL Server can be “stretched” into tables in SQL Azure Databases. The data can be split so that the “hot” data stays local and the “cold” data lives in the cloud. There were also some great demos of using the Kinect device to create a heat map of customer activity in a physical store (similar to tracking what people linger on and search for when shopping online). You can watch the PASS Summit 2014 Keynote here on PASStv.
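The stretch-table idea can be pictured with a toy routing function. Nothing below uses real SQL Server APIs; it is just a sketch of the hot/cold split, with rows newer than a cutoff kept on the local server and older rows sent to the cloud tier:

```python
from datetime import date

# Toy model: each row is (order_date, payload). Rows newer than the cutoff
# stay "local" (on-prem SQL Server); older rows are "stretched" to the cloud.
def split_hot_cold(rows, cutoff):
    local, cloud = [], []
    for row_date, payload in rows:
        (local if row_date >= cutoff else cloud).append((row_date, payload))
    return local, cloud

rows = [
    (date(2014, 11, 1), "recent order"),    # hot: queried constantly
    (date(2012, 3, 15), "archived order"),  # cold: rarely touched
]
local, cloud = split_hot_cold(rows, cutoff=date(2014, 1, 1))
# In the real feature, a query spanning both ranges would transparently
# read from both tiers; the application never sees the split.
```

The point of the demoed feature is that this partitioning happens inside the engine rather than in application code like this.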
As a Senior Solutions Architect with Scalability Experts, I work with large enterprise customers (Fortune 500 type) on a regular basis. There is more and more interest in leveraging the Public Cloud for some workloads and in taking advantage of “on-prem” resources in a cloud-like way. This means deploying your internal resources the same way public cloud resources are deployed (for example, via Cisco’s Microsoft Fast Track certified FlexPod or VSPEX integrated infrastructure solutions), with a similar chargeback (or “showback”) model, automated self-service deployment of infrastructure, and monitoring of the entire stack.
One of the things that I really like about Microsoft’s products is the focus on ease of use, tight integration, and low TCO. This is important to a lot of the customers I interact with, and it is why I have seen a surge of Cisco UCS products in my customer base over the past few years. Cisco has a similar goal of keeping things simple and TCO low – read this Total Economic Impact report from Forrester on UCS ROI/TCO. Cisco also provides Management Pack plug-ins for Microsoft’s System Center suite for tight integration, so you can manage the entire stack (hardware, hypervisor, application, and even Public Cloud) with a single tool. It is great to see how this partnership between Microsoft and Cisco benefits the customers I work with.
Microsoft’s SQL Server 2014 also brings “In-Memory” technology to OLTP in a cost-effective manner by not forcing a complete rewrite of the application. In a recent case study on Microsoft SQL Server 2014 running on Cisco UCS, Progressive Insurance was able to take advantage of this technology to further one of its competitive advantages: ease of use.
Eventually, I see the Public Cloud taking on a more “primary” role. Similar to the “Everything on a VM unless there is a reason not to” mantra, I see an “Everything on a Public Cloud VM unless there is a reason not to” mantra on the long-term horizon. Until then, the Hybrid Cloud will be the default stance for many large enterprises.
A Guest Blog by Cisco’s Frank Cicalese: Frank is a Technical Solutions Architect with Cisco, assisting customers with their designs of SQL Server solutions on the Cisco Unified Computing System. Before joining Cisco, Frank worked at Microsoft Corporation for 10 years, excelling in several positions, including Database TSP. Frank has in-depth technical knowledge and proficiency with database design, optimization, replication, and clustering, and has extensive virtualization, identity and access management, and application development skills. He has established himself as an architect who can tie core infrastructure, collaboration, and application development platform solutions together in a way that drives understanding and business value for the companies he serves.
Ah yes, it’s that time of year again. It’s time for PASS Summit! I hope all of you are having a great event thus far. During my conversations with customers and peers, I am inevitably asked “Why should we implement SQL on UCS?” In this blog I cover this very common question. First off, for those of you not familiar with Cisco UCS, please visit here when you have a moment to learn more about this great server architecture. So, why would anyone want to consider running their SQL workloads on Cisco UCS? Read on to learn about what I consider to be the top reasons to do so…
High availability is one of the most important factors for companies when considering an architecture for their database implementations. UCS gives companies confidence that their database implementations can recover from a catastrophic data center event in minutes, as opposed to the hours, if not days, that recovery would take on a competing architecture. UCS Manager achieves this through its implementation of Service Profiles. A Service Profile contains the identity of a server. The UCS servers themselves are stateless and do not acquire their personality (state) until they are associated with a Service Profile. This stateless architecture allows server hardware to be re-purposed dynamically and can be used to re-introduce failed hardware back into production within five to seven minutes.
Service Profiles can provide considerable relief for SQL Server administrators when re-introducing failed servers back into production. Service Profiles make this a snap! Just disassociate the Service Profile from the downed server, associate it with a spare server, and the workload will be back up and running in five to seven minutes. This is true for both virtualized and bare-metal workloads! Yes, you read that correctly! Regardless of whether the workload is virtual or bare-metal, Cisco UCS can move it from one server to another in five to seven minutes (provided the servers are truly stateless, i.e., booting from SAN).
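The disassociate/associate workflow above can be pictured with a small conceptual model. This is not the UCS Manager API – the profile names, MACs, and WWPNs below are made up for illustration – but it shows why a stateless blade can inherit a workload’s identity:

```python
from dataclasses import dataclass
from typing import Optional

# Conceptual sketch only: a Service Profile carries the server's identity
# (MAC addresses, WWPNs, boot policy) while the blades themselves are stateless.
@dataclass
class ServiceProfile:
    name: str
    macs: list
    wwpns: list
    boot_policy: str = "boot-from-SAN"
    blade: Optional[str] = None   # physical server currently associated

    def disassociate(self):
        self.blade = None

    def associate(self, blade_id: str):
        # The blade inherits the profile's identity; because the OS and data
        # live on the SAN, the workload resumes on the new hardware unchanged.
        self.blade = blade_id

sql_profile = ServiceProfile("sql-prod-01",
                             macs=["00:25:b5:00:00:01"],
                             wwpns=["20:00:00:25:b5:01:00:01"])
sql_profile.associate("chassis-1/blade-3")   # original server
sql_profile.disassociate()                   # blade-3 fails
sql_profile.associate("chassis-1/blade-7")   # spare blade takes over
```

Because the identity travels with the profile rather than the hardware, the SAN zoning, boot target, and network state all follow the workload to the spare blade.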
Since every server in UCS that serves a workload requires an associated Service Profile, Cisco UCS Manager provides the ability to create Service Profile Templates, which ease the administrative effort involved in creating Service Profiles. Server administrators can configure Service Profile Templates specifically for their SQL Servers and foster consistent standardization of their SQL Server implementations throughout the enterprise. Once the templates are created, Service Profiles can be generated from them and associated with a server in seconds. Furthermore, these operations can be scripted via Cisco’s open XML API and/or PowerShell integration (discussed next), simplifying the deployment process even more.
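As a flavor of what scripting against the XML API looks like, every session begins with an aaaLogin call whose response carries a session cookie used by subsequent requests. The sketch below only builds the request and parses a sample response; the actual HTTP round trip to the manager (conventionally at http(s)://&lt;ucsm&gt;/nuova) is omitted, and the sample response string is illustrative, not captured from a real system:

```python
import xml.etree.ElementTree as ET

# Build the XML body for the aaaLogin method that opens a UCS Manager session.
def build_login_request(user: str, password: str) -> str:
    return f'<aaaLogin inName="{user}" inPassword="{password}" />'

# The login response carries the session cookie in its outCookie attribute;
# subsequent API calls pass this cookie to authenticate themselves.
def extract_cookie(response_xml: str) -> str:
    return ET.fromstring(response_xml).get("outCookie")

# A shortened, hypothetical response of the shape the manager returns:
sample_response = '<aaaLogin cookie="" response="yes" outCookie="1420000000/abcd-1234" />'
cookie = extract_cookie(sample_response)
```

The PowerShell PowerTool cmdlets mentioned below wrap this same XML API, so anything scriptable here is also scriptable from PowerShell.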
To learn more about Service Profile Templates and Service Profiles, please visit here.
Manage Workloads Efficiently:
Cisco UCS has very tight integration with Microsoft System Center. Via Cisco’s Operations Manager Pack, Orchestrator Integration Pack, PowerShell PowerTool, and Cisco’s extensions to Microsoft’s Hyper-V switch, administrators are able to monitor, manage, and maintain their SQL Server implementations proactively and efficiently on Cisco UCS. Additionally, Cisco’s PowerTool for PowerShell, with its many cmdlets, can help automate any phase of management with System Center, further streamlining the overall management and administration of Cisco UCS. All of this integration comes as a value-add from Cisco at no extra cost!
Please visit http://communities.cisco.com/ucs to learn more about, download and evaluate Cisco’s Operations Manager Pack, Orchestrator Integration Pack and PowerShell PowerTool.
The world of data is changing. Businesses face growth in the volume of information and the types of data they encounter. There are new landscapes of vast and dynamic information that must be processed, managed, and analyzed to achieve business insight. It is no surprise, therefore, that legacy infrastructures are failing to meet IT’s expectations.
For many of you, this is why you are in Seattle this week: to attend PASS Summit 2014, the SQL PASS organization’s annual conference on SQL Server. You want to learn this week, from your peers, from Microsoft, and from vendors, ways to successfully harness SQL Server and drive solutions that do meet your business’s and users’ expectations.