
AI Ops includes the ability to dynamically optimize infrastructure resources through a holistic approach. Cisco Workload Optimization Manager is an important component in our strategy of delivering enhanced customer benefits through AI Ops.

Guest Blogger: Vish Jakka, Product Manager, UCS Solutions

Our Strategy for Delivering the Benefits of AI Ops

Cisco is executing a strategy to consistently enhance the customer benefits we deliver through AI-driven Operations (AI Ops). This blog is the latest in a series that describes our strategy, our open architecture, and how we are implementing each of those benefits. In the first blog in this series, we defined four categories of AI Ops benefits:

  1. Improved user experience
  2. Proactive support and maintenance
  3. Self-optimization of resources
  4. Predictive operational analytics

Multi-Dimensional AI Ops Strategy

Vendors use the terms AI, machine learning, and AI Ops in a variety of ways, and their focus is primarily on hardware. Our strategy for delivering the customer benefits of AI Ops is a broader architectural vision, one that includes infrastructure, workloads, and enhanced customer support across both on-premises and cloud environments. Cisco’s strategy incorporates an open API framework and integrations with Cisco and partner platforms.

Infrastructure management is one dimension of AI Ops, and Cisco Intersight is an integral component of Cisco’s strategy. Managing workloads is another essential dimension, so Cisco Workload Optimization Manager (CWOM) is also an important component of this strategy.

AI Ops Portfolio Working Together

In a prior blog, we explained how Intersight delivers an AI-driven user experience through our open API framework. We also posted two blogs in this series explaining how Intersight delivers benefit #2, AI-driven proactive support and proactive maintenance. Proactive support is enabled through the Intersight integration with the Cisco service desk digital intelligence platform (internally referred to as BORG), which combines AI, analytics, and machine learning and is used by the Cisco Technical Assistance Center. In this blog, I explain how we deliver benefit #3, the self-optimization of resources, through monitoring and automation with Cisco Workload Optimization Manager.

Self-Optimization of Resources

The self-optimization of resources includes both on-premises and public cloud infrastructure. You need to monitor and automate across a variety of virtualized environments, containers, and microservices. As we explained in this blog, the journey to self-optimization requires a holistic approach.

To ensure that your applications perform continuously and your IT resources are fully optimized, you need full visibility across compute infrastructure and applications, and across networks and clouds. You also need all of this intelligence at your fingertips, so you can quickly and easily make the right decisions in real time to assure application performance, operate efficiently, and maintain compliance in your IT environment.

Cisco Workload Optimization Manager is an AI-powered platform that delivers this functionality through integrations with Cisco’s multicloud portfolio, ACI, UCS management, HyperFlex, and a broad ecosystem of partner solutions that will continue to grow over time. CWOM continuously analyzes workload consumption, costs, and compliance constraints and automatically allocates resources in real time. This video provides an overview of Workload Optimization Manager.

How Does AI Ops Work?

Resource allocation, workload scheduling, and load balancing have been critical to efficient IT operations for decades. Workload Optimization Manager uses AI and advanced algorithms to manage complex multicloud environments. It views on-premises resources and the cloud stack as a supply chain of buyers and sellers: CWOM evaluates the options for running each workload and manages resources as “just in time” supply to cost-effectively meet workload demand, helping customers maintain a continuous state of application health.

[Screen shot from CWOM showing cost analysis of pending actions]
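To make the supply-chain analogy concrete, here is a minimal sketch of utilization-based pricing, the mechanism that lets workloads act as buyers and infrastructure act as sellers. The function name and the pricing formula are assumptions chosen for illustration; they are not CWOM’s internal model or API.

```python
# Illustrative sketch of utilization-based pricing in a market of buyers
# and sellers. The formula and names below are assumptions for this
# example, not Cisco Workload Optimization Manager internals.

def resource_price(used: float, capacity: float) -> float:
    """Price a commodity (CPU, memory, storage, etc.) by how scarce it is:
    the closer a seller is to full utilization, the more it charges, which
    steers buyers (workloads) toward idle, "just in time" capacity."""
    utilization = used / capacity
    return 1.0 / max(1e-6, (1.0 - utilization) ** 2)


if __name__ == "__main__":
    # A congested host is far more "expensive" than a lightly used one.
    print(round(resource_price(used=36.0, capacity=40.0), 1))  # 90% utilized -> 100.0
    print(round(resource_price(used=12.0, capacity=40.0), 1))  # 30% utilized -> 2.0
```

With a rule like this, a workload that shops for the cheapest adequate seller naturally lands on the least congested infrastructure, whether that is an on-premises host or a cloud instance.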

Many AI Ops solutions are complex to deploy, and they require a significant amount of time to accumulate information before their analysis becomes effective. Workload Optimization Manager is easy to install, and its agentless technology instantly begins to discover all the elements in your environment, from applications down to individual components. The unique decision engine curates workload demand, so it can generate fast, accurate recommendations after collecting data for only a short period of time. CWOM uses three categories of functionality to optimize the use of available resources:

Abstraction: All workloads (applications, VMs, containers) and infrastructure resources (compute, storage, network, fabric, etc.) are abstracted into a common data model, creating a “market” of buyers and sellers of resources.

Analysis: A decision engine applies the principles of supply, demand, and price to the market. There are costs associated with on-premises infrastructure resources, and cloud providers price their resources based on utilization levels. The analytics ensure the right resource decisions are made at the right time.

Automation: Workloads are precisely resourced, automatically, to optimize performance, compliance, and cost in real time. The workloads become self-managing anywhere, from on-premises to public cloud environments.
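The sketch below shows how these three categories fit together in a market-based model: workloads and providers are abstracted into one common model, the analysis step compares the price a workload currently pays with the best available alternative, and the automation step turns that comparison into a concrete action. The entity names, pricing rule, and action format are simplifying assumptions for illustration and do not reflect CWOM’s actual data model or decision engine.

```python
# Simplified illustration of the abstraction -> analysis -> automation flow.
# Entity names, the pricing rule, and the action format are assumptions
# made for this sketch; they do not reflect CWOM internals.

from dataclasses import dataclass


# --- Abstraction: workloads and infrastructure in one common model ----------
@dataclass
class Provider:
    name: str
    capacity: float      # abstract units of a commodity (CPU, memory, IOPS, ...)
    used: float

    def price(self) -> float:
        # Scarcer spare capacity means a higher price (assumed rule).
        return 1.0 / max(1e-6, 1.0 - self.used / self.capacity)


@dataclass
class Workload:
    name: str
    demand: float
    current_provider: Provider


# --- Analysis: compare the current price with the best alternative ----------
def best_alternative(w: Workload, providers: list[Provider]) -> Provider:
    eligible = [p for p in providers
                if p is not w.current_provider and p.capacity - p.used >= w.demand]
    return min(eligible, key=Provider.price, default=w.current_provider)


# --- Automation: emit an action whenever a better placement exists ----------
def plan_actions(workloads: list[Workload], providers: list[Provider]) -> list[str]:
    actions = []
    for w in workloads:
        target = best_alternative(w, providers)
        if target.price() < w.current_provider.price():
            actions.append(f"MOVE {w.name}: {w.current_provider.name} -> {target.name}")
    return actions


if __name__ == "__main__":
    busy = Provider("host-a", capacity=100.0, used=85.0)
    idle = Provider("host-b", capacity=100.0, used=20.0)
    vm = Workload("web-vm-01", demand=10.0, current_provider=busy)
    for action in plan_actions([vm], [busy, idle]):
        print(action)  # MOVE web-vm-01: host-a -> host-b
```

CWOM itself also weighs costs and compliance constraints, as described above, but the core market idea this sketch illustrates is the same: a continuous analysis that turns price differences into automated resourcing decisions.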

These combined capabilities enable IT to assure application performance, at the lowest cost, while maintaining compliance with policy – from the data center to the public cloud and edge.


Authors

Ken Spear

Sr. Marketing Manager, Automation

UCS Solution Marketing