We don’t invoke the term innovation lightly at Cisco. As Frank Palumbo recently talked about, change is the only constant, and our data center customers need to stay in front of that change. What we’re hearing from them often centers on three critical concepts:
1. We need a common operating environment that spans from the data center to the very edge. “Edge” here describes the many worlds that exist beyond the walls of the data center, where the demand for computing power is inexorably growing. For service providers, that can mean IT infrastructure located at the customer premises. For large enterprise and public sector IT teams, the edge is found in the branch offices, retail locations and remote sites where innovation is exploding with dynamic customer experiences and new ways of doing business. It’s at the wind farm and at the end of the drill bit miles below the oil rig. It’s in the “fog” of connected sensors and smart objects in connected cities. And it’s in the handheld devices that billions of people are using today to consume and generate unprecedented volumes of data and insight, and in the 50 billion people and things that Cisco estimates will be connected by 2020.
2. We need a stronger engine to accelerate core applications and power data-intensive analytics. (AKA, “you’re going to need a bigger boat.”) The imperative for faster and better decisions has never been greater, and the tools that extract the signal from the noise of the data deluge require big horsepower. Recommendation engines, real-time price optimization, personalized location-based offers, improved fraud detection… the list of opportunities created by Big Data and the IoE goes on. All while IT continues to deliver the core applications that keep the business running, uninterrupted and faster than before.
3. We need a common operating environment that spans traditional and emerging applications. Complexity is the bane of innovation and the bane of IT. In addition to the familiar workloads, which are well understood in terms of bare-metal scalability and virtual encapsulation, there is growing use of applications architected for massive horizontal scale. In-memory, scale-up analytics are being used right alongside cloud-scale technologies like MapReduce to tackle different elements of business problems in different ways. Very different architectures, with very different demands on computing infrastructure. The conditions for complexity loom. Will a hero emerge?
When UCS was born it shook up many of the fundamental assumptions about what data center infrastructure should be expected to do and what IT could do to accelerate business. With this launch, history repeats itself as we work to help customers future-proof the data center for change tomorrow and transformation today. Our development team has taken the next stride in the journey of re-inventing computing at the most fundamental levels, to power applications at every scale.
I hope you will join us for the event on 9/4 to see how we’re taking our strategy forward in the data center. We have a bit of a baseball theme in the launch since we’re delighted to be joined by Major League Baseball’s Joe Inzerillo at our event in New York. So follow the conversation as it unfolds over the coming weeks with #UCSGrandSlam and #CiscoUCS. The bases are loaded.
Edward Gibbon’s classic work of English historical literature, “The Decline and Fall of the Roman Empire,” traces the trajectory of Western civilization from the height of the Roman Empire to the fall of Byzantium. I’m using this comparison to bring to light discussions that my team and I have conducted over the past few years on the topic of as-a-Service offerings (IaaS, PaaS, SaaS, etc.) in public and private clouds.
Shifting your workloads to the cloud, whether public or private, looks attractive in a number of ways. Conceptually, you see the gains from quick, readily available infrastructure: click a button or two, and a new virtual machine in the cloud appears, ready when you need it. The initial gains materialize as on-demand capacity, high availability, and disaster tolerance, to name a few. But what about the costs of building all of this, and has anyone ever seen a positive return? Has anyone really seen a gain from IaaS alone?
Public and private cloud service models are still maturing, but the overall question we keep hearing is: is it worth it? We’ve come across several articles that look at the features and functions of as-a-Service offerings (including PaaS, IaaS, SaaS, etc.) along with the theoretical return on investment (ROI) of each. What we have seen is a shift in focus: standalone IaaS eventually feeds into higher delivery models like BPaaS (Business Process as-a-Service) or SaaS (Software as-a-Service). Of course, the message differs between enterprises and service providers, where these higher models can mean more reliable revenue flows for service providers and a more deliberate approach for enterprises.
In the months spent researching this, we never found a definitive paper or published research outside of system integrators or service providers that had actual projected financials for an SP or enterprise. Moreover, the financial calculations we did find were heavily weighted toward ROI models built on specific vendor equipment rather than diverse, mixed infrastructure environments. When calculating the costs of IaaS, the requirements of service providers and enterprises rarely involve simple scenarios where a predictable, medium-sized virtual machine would suffice as a definitive control point for those calculations. Instead, we’ve seen the requirements take the form of complex workloads, such as database and transaction processing, that demand more robust, and more expensive, IaaS-class VMs within diverse infrastructure distributed across multiple tiers. Regardless of the requirements category, multiple small-scale and diverse control projects are needed to gather precise cost, performance, and availability metrics to validate the real cost and ROI of IaaS models. IaaS, for the most part, has to expand its service offerings into areas like virtual desktops (VDI), enhanced data security, and pay-per-use capacity-on-demand services, to name a few. At that point, IaaS moves beyond its rudimentary form toward a superset like PaaS, BPaaS, or SaaS. One thing to keep in mind: PaaS remains closely associated with lower-end services similar to IaaS, where its monetization and revenue generation are almost identical.
In summary, we see that the cloud is here to stay, but there is a decline in the need for a simplistic offering of services that goes no further than IaaS. The service portfolio must evolve beyond IaaS toward models like BPaaS and SaaS to be cost effective. Businesses, whether SP or enterprise, are going to leverage those services in their markets and effect significant changes in the way they operate. The budgets that once funded individual business units (speaking in the context of the enterprise) to maintain their own IT presence are now consolidated into larger capital and revenue budgets for enhanced IT subscription services that go far beyond what used to be just cloud infrastructure.
Data centers are rapidly adopting integrated infrastructure to run their businesses. However, the monitoring and management tools for these integrated components have not yet converged. Performance monitoring and capacity planning are done with multiple tools, leading to sub-optimal resource management. Customers have to spend days or weeks installing and configuring typical agent-based solutions, and then train additional personnel to manage and maintain these multiple tools.
Cisco UCS Performance Manager provides visibility from a single console into Cisco Unified Computing System (UCS) components for performance monitoring and capacity planning. It provides data center administrators assurance for Cisco UCS and other integrated infrastructure implementations (e.g., FlexPod, Vblock, VSPEX) and ties application performance to physical and virtual infrastructure performance, allowing IT staff to optimize resources and deliver better service levels to customers.
Cisco UCS Performance Manager was built in partnership with Zenoss. UCS Performance Manager is a virtual appliance designed for easy installation and configuration: installation and setup take less than an hour, a departure from the typical customer experience. The main customer requirements have been built into the product to provide key performance indicators out of the box, and a customizable dashboard offers a quick view of the components of interest.
One of the main constructs of UCS Performance Manager is user-defined host groups. Customers can put related hosts in a group, and UCS Performance Manager will automatically determine the underlying infrastructure those hosts and applications depend on. This dynamic view presents the relationships and health of the hosts and related infrastructure for a quick status check. Each device and component can be clicked for more information.
From a UCS perspective, one of the frequent questions is how to find the bandwidth utilization of the server ports connected to servers, the Fibre Channel uplinks going to the SAN, the Ethernet uplinks going to the LAN, and so on. UCS Performance Manager provides this information in graphical and tabular form for easy consumption. A bandwidth utilization and health overlay is also applied to a UCS topology view.
By using the available views and trend graphs, customers can quickly identify congestion points in the integrated infrastructure and proactively prevent application performance degradation. Administrators can provision additional resources to ease congestion or move load across the integrated infrastructure based on business needs.
With UCS Performance Manager in the data center customers will:
Have a better understanding of UCS integrated infrastructure at a component level
Maintain and provide high level of service with optimal resource allocation
Save time and resources with a single pane of glass for integrated infrastructure monitoring
We are presenting an overview of UCS Performance Manager at the Cisco booth at VMworld next week. If you are attending the event stop by for a demo and live Q&A. Get a first look on the show floor at the Cisco booth #1217.
While at VMworld attend the theater presentation and break out sessions for more information.
1. “Performance and Capacity Management for Cisco Converged Infrastructures”
Tuesday August 26, 2:00pm, Cisco booth #1217
2. “Management and Automation for UCS Integrated Infrastructure”
Tuesday August 26, 3:30pm, Room 3022 Moscone West
The Cisco UCS® C240 M3 Rack Server captured the number-one spot for overall price/performance on the TPC-H benchmark at the 1000GB scale factor with a price/performance ratio of $0.73 USD per QphH@1000 GB and demonstrated 304,362 queries per hour (QphH@1000GB), making it the fastest two-socket server running Microsoft SQL Server 2014.
The TPC-H benchmark evaluates a composite performance metric (QphH@size) and a price/performance metric ($/QphH@size) that measure the performance of various decision support systems by running sets of queries against a standard database under controlled conditions. As tested, the benchmark configuration consisted of a Cisco UCS® C240 M3 Rack Server equipped with 768 GB of memory and two Intel Xeon E5-2690 v2 processors. The system ran Microsoft SQL Server 2014 Enterprise Edition and Windows Server 2012 R2 Standard Edition. Check out the Performance Brief for additional information on the benchmark configuration. The detailed official benchmark disclosure report is available at the TPC Results Highlights Website.
Some of the key highlights of Cisco’s TPC-H Benchmark results are:
Best Price/Performance: The Cisco UCS® C240 M3 Rack Server captured the number-one price/performance spot on the TPC-H benchmark at the 1000GB scale factor with a price/performance ratio of $0.73 USD per QphH@1000GB. This result beats the 8-socket HP ProLiant DL980 G7 running Microsoft SQL Server, at 219,887 QphH and $1.86 USD per QphH@1000GB, by roughly 60 percent.
Best Two-Socket Server Performance for Microsoft SQL Server 2014: Cisco demonstrated 304,362 queries per hour (QphH@1000GB), making the C240 M3 the fastest two-socket server running Microsoft SQL Server.
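The "60 percent" price/performance claim above can be sanity-checked directly from the two published ratios; a minimal sketch, using only the figures quoted in this post:

```python
# Sanity check of the price/performance comparison cited above.
# Both figures are the published TPC-H ratios quoted in this post.
cisco_price_perf = 0.73  # USD per QphH@1000GB, Cisco UCS C240 M3
hp_price_perf = 1.86     # USD per QphH@1000GB, 8-socket HP ProLiant DL980 G7

# Relative improvement: how much cheaper per query-per-hour the Cisco result is.
improvement = (hp_price_perf - cisco_price_perf) / hp_price_perf
print(f"{improvement:.0%}")  # → 61%, rounded to "60 percent" in the post
```

In other words, the Cisco system delivers each query-per-hour at roughly 39 percent of the HP system's cost, a reduction of about 61 percent.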
Steve Jobs is arguably the most amazing innovator of our time. I recently read some of his thoughts on innovation. His statement “Innovation distinguishes between a leader and a follower” caused me to reflect upon my eight-year association with data virtualization and to consider who in the IT analyst community have been the innovative leaders.
The roll call of top analysts doing innovative work continues with Noel Yuhanna of Forrester, who wrote the analyst community’s first research paper on data virtualization in January 2006, Information Fabric: Enterprise Data Virtualization.
Gartner’s Ted Friedman and Mark A. Beyer, and more recently Merv Adrian, Roxane Edjlali, Mei Selvage, Svetlana Sicular and Eric Thoo, have been both descriptive and prescriptive about the use of data virtualization as a data integration delivery method, a data service enabler and a key component in what Gartner calls the Logical Data Warehouse.
Further, myriad other analysts have made amazing contributions.
The learned trio of Dr. Barry Devlin, Dr. Robin Bloor, and Dr. Richard Hackathorn have pushed the art of the possible.
Analyst/practitioners such as Jill Dyche, Mike Ferguson, Rick Sherman, Steve Dine, Evan Levy, David Loshin and William McKnight have, via their hands-on client work, “kept data virtualization grounded on reality street,” to quote Mike Ferguson.
And let’s not forget the Massachusetts’ Waynes — Wayne Eckerson formerly of TDWI and Wayne Kernochan, author of the eponymous Thoughts From a Software IT Analyst blog. Their voices and insights have proven invaluable.
To quote Gene Roddenberry, “It isn’t all over; everything has not been invented; the human adventure is just beginning.” The same is true for data virtualization. So I look forward to more great insights from these innovators, as well as a new generation led by Puni Rajah of Canalys and Vernon Turner of IDC.
To see Rick van der Lans and Barry Devlin on stage and gain even more insights from the 2014 Data Virtualization Leadership Award winners, join us at Data Virtualization Day 2014 on October 1 in New York City.
Watch for a sneak peek of Data Virtualization Day 2014.
To learn more about Cisco Data Virtualization, check out our page.