Speed is one thing that Cisco UCS and the NHL’s Minnesota Wild franchise have in common. If you have ever been to a professional hockey game, you have probably come to appreciate the speed, skill, and nimbleness of the players out on the ice. For Cisco UCS, speed is an inherent attribute of what we do, too – our compute business is highly competitive and requires constant, skillful, and quick innovation to deliver the best and newest technology to our customers.
Simplify infrastructure to boost staff productivity
Improve resource management for controlled growth
Promote sustainability to conserve resources and provide environmentally conscious facilities for clients
Looking at their long-term goals for cloud computing, the Wild staff decided to invest in a solution based on our Unified Computing System™ (UCS®) servers with a Tegile-based hybrid storage solution. In doing so, the Wild established a highly agile data center environment that supports their current and future cloud initiatives with a virtual desktop infrastructure solution. The end results of the IT transformation project were impressive, as the Wild:
Achieved a 43 percent reduction in support costs
Reduced power consumption by 63 percent and heat output by 68 percent
Reduced their data footprint from 42 TB to 17 TB
Once again we see the UCS architecture delivering improved performance at lower operating costs for a Microsoft-oriented environment – in this case, Dynamics and CRM. The Minnesota Wild’s IT organization is small compared with larger enterprise IT organizations, yet it was able to deliver significant business value and position the franchise for future technology shifts. Read more about the Minnesota Wild and their Cisco UCS experience here.
The Microsoft ecosystem continues to be an important area of focus for our Cisco Datacenter product and solution teams. The UCS, Nexus, UCS Management, and ACI teams have continued to innovate and launch new offerings during the past few months. Let’s take a look at our recent happenings and launches that can help your organization deliver an optimal datacenter environment for your Microsoft platforms:
1. PASS, Microsoft SQL Server, and Cisco’s Unified Datacenter…
We’ve ramped up our focus on Microsoft’s SQL Server platform! Investments in PASS (the Professional Association for SQL Server), PASS Summit 2013, and the PASS Virtualization Chapter afford us the opportunity to educate this worldwide BI and data management audience on UCS, Nexus, FlexPod, and VSPEX. Stay tuned for more from us, too, as Microsoft gets closer to shipping its SQL Server 2014 platform.
2. Cisco and the Microsoft Cloud OS Launch…
Microsoft’s Cloud OS launch – the coming-out party for Windows Server 2012 R2 and System Center 2012 R2 – is occurring around the world from November 2013 through January 2014. Cisco will be a local sponsor at many of these events; in fact, as of this writing we have wrapped up our participation at multiple events in Canada, Germany, and South Africa. Please stop by your local Microsoft Cloud OS event and visit the Cisco booth to learn more, and hear from Cisco’s Chief Technology & Strategy Officer, Padmasree Warrior, on our Microsoft alliance and datacenter solutions.
3. Cisco, Microsoft, and the Application Centric Infrastructure (ACI)…
Microsoft was a key strategic partner at our recent New York City ACI launch, with strong support from key Microsoft leaders such as Satya Nadella, Executive Vice President of their Cloud and Enterprise business unit. Satya shared the stage with Cisco CEO John Chambers in announcing ACI to the public. In addition, Microsoft’s Brad Anderson, Corporate Vice President, System Center, blogged on the event and created a video on his ACI thoughts. Microsoft – with their key platforms of Exchange, SQL Server, and SharePoint – sees ACI as a way to deliver a ‘…datacenter without boundaries for our customers…’
Yesterday, Nov 6, Cisco unveiled details of the Application Centric Infrastructure with an ecosystem of partners that share our common view: IT needs a transformation to create the Application Economy. Some key technology leaders spoke about the application lifecycle impact of an open, centralized policy model for complete infrastructure automation, including configuration, operation, monitoring, and optimization. I’d like to recap a few of those comments here today.
During the ACI announcement, Brad Anderson, Corporate Vice President in Microsoft’s Windows Server and System Center Group (WSSC), said that
virtualization has unshackled applications from the hardware in the past. But now with ACI we can do much more. So first of all, we can have the applications be able to describe their needs for more rapid provisioning. So with the view we can get across physical and virtual, we can see what is happening with the application, we can optimize the infrastructure for the application, and do more rapid troubleshooting.
…the integration with Microsoft Cloud OS and UCS is really remarkable. Literally you have a common way to automate everything from the application, down to the operating system, down to all of the hardware-level components. But ACI gives us the ability to do some really remarkable things…
Imagine Exchange, SharePoint, and Lync being able to be shipped with ACI policies that describe exactly how the network should be configured, how it should be optimized, and how it should automatically be provisioned across physical and virtual in a holistic way. That’s the kind of value we are going to be able to deliver together.
“…These new solutions are designed to improve business agility and reduce cost by driving infrastructure automation in support of core business processes and applications. This next-generation infrastructure will deliver increased application performance, resource pooling, visibility, automation and mobility through:
· Converged ACI stacks that include fully integrated versions of Windows Server 2012 R2 Hyper-V, System Center 2012 R2, SQL Server, Exchange and SharePoint”
In my earlier ACI post I introduced the IT challenge posed by apps that behave differently; now I want to point out that the new converged ACI stacks will fully integrate the operating system, orchestration, applications, and server and network infrastructure, giving enterprise customers the application agility to rapidly deploy Exchange, SQL Server, and SharePoint, to scale and upgrade them, and to decommission them.
Many next generation distributed cloud applications are being written on open source platforms. For a view on what ACI means to a leading open source cloud platform, OpenStack, let me quote what Jim Whitehurst, President and CEO of Red Hat, said at the launch:
…there’s a whole set of functionality that is required to run a portfolio of true production applications and be able to run a diverse set of applications and to make sure that you can actually guarantee the performance levels that you need. The great thing about ACI is it provides that really differentiated functionality that enterprises need, even on open platforms, but at the same time, it does it with open standards, open APIs, and an open ecosystem so that customers get the benefit without being locked in and maintain the flexibility they are looking for going forward.
For more on OpenStack and ACI, see this video – Application Policy and OpenStack – which explains how the DevOps community can extend agile processes to network infrastructure.
Guest post by Dennis Clark, Senior Solutions Marketing Manager -- Microsoft Applications, NetApp
We are here in Charlotte this week with our Cisco friends, with the opportunity to talk with all sorts of like-minded Microsoft SQL Server individuals at the 2013 SQL PASS Summit. The conversations range from database performance and developer issues to private cloud and data management concerns. We’ve also had some good chats with a few data warehouse folks, which prompted me to share some thoughts on this topic.
We know that the data warehouse (DW) is central to a comprehensive business intelligence (BI) solution. So clearly, if our DW isn’t up to snuff, as they say, then we can forget about delivering critical analytics to a growing number of LOB managers and execs. This, in turn, negatively affects the bottom line of the business, which isn’t good for anyone. And it isn’t getting any easier. Data is growing exponentially and the problem of integrating data from multiple sources isn’t going away any time soon. These issues, along with the complex interaction of the different components of a BI solution, continue to make the design, deployment and management of data warehouses a challenge. Now you can continue to throw money at it by over-provisioning and burning up valuable data center space and power to try to keep up, or you can strive to achieve a higher level of DW nirvana with Cisco and NetApp.
Guest post by Txomin Barturen, Senior Consultant – CTO Office, EMC Corporation
SQL Server provides customers with a vast array of technology options to address a diverse range of data and reporting requirements, from extremely high-throughput OLTP environments to bandwidth- and time-sensitive reporting and DSS systems. With choice comes the inevitable complexity of defining and building solutions. Customer IT teams are invariably dealing with Service Level Agreements (SLAs) from their internal customers. Time and financial constraints often limit the ability of internal IT teams to spend significant amounts of time defining, testing, and implementing the broad range of environments that they need to deploy.
Jointly, Cisco and EMC have partnered with Microsoft to deliver a set of solutions that are pre-validated to meet the requirements of customer SQL Server environments. These solutions implement the collective best practices for server, network, and storage, ensuring that customers implement a known valid configuration without the guesswork.
Fast Track Data Warehouse
Meeting data warehouse requirements means designing solutions that strike the ideal balance between performance, DW size, and cost. Design guidance from the SQL Server team dictates that the total data warehouse size be balanced against the storage system configuration, the server’s consumption rate (how fast the CPUs are able to process the data), and the interconnectivity between server and storage, so that data can be delivered at the required rate. Matching the server configuration, the interconnectivity (including HBAs), and the storage infrastructure requires considerable design, calculation, and testing across a number of disciplines.
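To make that balancing exercise concrete, here is a minimal sketch of the kind of arithmetic the Fast Track methodology implies: the CPUs’ aggregate scan rate determines how fast storage must feed them. The per-core consumption rate and the throughput figures below are hypothetical placeholders for illustration, not published Fast Track benchmark numbers.

```python
# Illustrative sketch of "balanced architecture" sizing for a data warehouse.
# All numbers here are assumed placeholders, not vendor benchmark results.

def required_storage_throughput(cores: int, rate_per_core_mbps: float) -> float:
    """Aggregate scan rate the CPUs can consume, in MB/s."""
    return cores * rate_per_core_mbps

def is_balanced(cpu_demand_mbps: float, storage_supply_mbps: float,
                tolerance: float = 0.10) -> bool:
    """A design is roughly balanced when storage supply matches CPU demand
    to within `tolerance`, so neither side is heavily over-provisioned."""
    return abs(storage_supply_mbps - cpu_demand_mbps) <= tolerance * cpu_demand_mbps

# Example: 16 cores at an assumed 200 MB/s per core demand ~3200 MB/s,
# so a storage configuration delivering ~3100 MB/s is within balance,
# while one delivering only 2000 MB/s leaves the CPUs starved.
demand = required_storage_throughput(16, 200.0)
print(demand, is_balanced(demand, 3100.0), is_balanced(demand, 2000.0))
```

The same check applies at each hop (HBAs, switch links, array front end): the lowest-throughput component sets the ceiling, which is why the guidance stresses testing every layer, not just the disks.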