According to recent studies such as the Cisco Global Cloud Index, 94% of all workloads will be processed by Private and Public Cloud Data Centers by 2021, so adopting the Cloud is not a matter of if, but a matter of when.

Once we accept it and understand the benefits, the quest usually begins with some questions like:

What do I move to the Public Cloud and what do I keep in the Private one?

How can I optimize and limit my Multicloud resource usage and cost?

What amount of resources will I need to buy in the Public Cloud(s)?

In a world where resources are limited, it is often recommended to use only what is needed, and that should be the case for Multicloud IT resources as well. However, single-click automated experiences can make resources seem “limitless,” and it is hard to know how much our apps and workloads actually need.

Just remember the last time somebody asked you or your IT team to provision a new server or VM for a new Web Service, for instance. They may have asked for 8 vCPUs and 16 GB of memory based on the Web Admin’s experience, or just because the recommended spec said so.

Requests like these may repeat often throughout the year, and all of a sudden you realize IT has just run out of servers in your Data Center (in the case of private cloud), or even out of budget to pay Amazon, Azure, Google or others (in the case of public cloud).

We now have two options:

1. Ask for more budget to get new servers (in the case of private cloud) or increase your monthly OPEX (in the case of public cloud).

Assuming the budget gets approved, the person who requested the resources will have to wait until the new server is ordered and delivered, and the business may be impacted by the lack of performance in the meantime. In the case of the Public Cloud, once you increase the budget, you have just established a new baseline, and spend will keep increasing from there.

2. The smart one: let Analytics and Automation help you repatriate resources and re-assign them to the apps that need them, using tools like Cisco Workload Optimization Manager (CWOM) in your Multicloud IT.

According to Forbes and Stanford University research from 2015, 30% of servers worldwide (about 10 million) had not delivered information or computing services in six months or more, earning them the designation “comatose servers.” That means around 30 billion USD in Data Center capital sitting idle globally!

Instead of buying additional resources for the apps that need them while other resources sit underutilized, let’s analyze how the second option above may help address those “comatose” resources.

By installing an Analytics & Automation software solution that does not require any agents (like CWOM), you can analyze how your current workloads operate and how many resources they actually need, in a cloud- and hardware-agnostic way.

As shown in the video below, CWOM will constantly provide recommendations to keep your infrastructure right-sized based on demand and placed in the best-performing locations. It can also estimate when more infrastructure will be needed and how much such acquisitions would cost.

CWOM not only provides such recommendations, but can also automate some actions, like scaling resources such as CPU and memory up or down, or moving a VM from a congested server to a less-congested one.
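To make the idea concrete, here is a minimal, illustrative sketch of threshold-based right-sizing. This is not CWOM’s actual algorithm (CWOM uses its own demand-driven analysis); the function name, thresholds, and the doubling/halving rule are all assumptions chosen just to show the kind of recommendation being described.

```python
# Illustrative threshold-based right-sizing sketch.
# NOT CWOM's actual algorithm -- a generic, hypothetical example of the
# kind of scale-up / scale-down recommendation described above.

def recommend(vm_name, vcpus, avg_cpu_util, mem_gb, avg_mem_util,
              low=0.30, high=0.80):
    """Suggest scaling a VM up or down based on average utilization.

    Utilization is a fraction (0.0-1.0); `low`/`high` are the arbitrary
    thresholds this sketch uses to call a VM over- or under-provisioned.
    """
    actions = []
    if avg_cpu_util < low and vcpus > 1:
        actions.append(f"scale down {vm_name}: {vcpus} -> {max(1, vcpus // 2)} vCPUs")
    elif avg_cpu_util > high:
        actions.append(f"scale up {vm_name}: {vcpus} -> {vcpus * 2} vCPUs")
    if avg_mem_util < low and mem_gb > 2:
        actions.append(f"scale down {vm_name}: {mem_gb} -> {mem_gb // 2} GB memory")
    elif avg_mem_util > high:
        actions.append(f"scale up {vm_name}: {mem_gb} -> {mem_gb * 2} GB memory")
    return actions or [f"{vm_name}: right-sized, no action"]

# The 8 vCPU / 16 GB web server from earlier, averaging 10% CPU and 20% memory:
for action in recommend("web-01", 8, 0.10, 16, 0.20):
    print(action)
```

Run against the over-specced web server from the earlier example, this flags both CPU and memory for scale-down; a real tool would feed recommendations like these into automated actions instead of print statements.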

This is particularly useful in the Public Cloud as well, where you pay by the meter.

In that case, CWOM provides recommendations such as lower-cost regions, resource right-sizing, and even whether features like Reserved Instances may help reduce the overall monthly bill, depending on the usage of each workload.
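The Reserved Instance trade-off comes down to simple break-even arithmetic: a reservation is billed whether the instance runs or not, so it only pays off above a certain utilization. The sketch below uses hypothetical hourly rates (not real Amazon, Azure, or Google pricing) to show the comparison a tool would make per workload.

```python
# Illustrative break-even math for a Reserved Instance decision.
# The rates below are hypothetical placeholders, not real cloud pricing.

ON_DEMAND_PER_HOUR = 0.10  # pay-as-you-go rate (USD/hour, hypothetical)
RESERVED_PER_HOUR = 0.06   # effective reserved rate (USD/hour, hypothetical)
HOURS_PER_MONTH = 730

def monthly_cost(utilization):
    """Return (on_demand, reserved) monthly cost for one instance.

    `utilization` is the fraction of the month the workload actually runs.
    The reservation is paid for every hour, used or not.
    """
    on_demand = ON_DEMAND_PER_HOUR * HOURS_PER_MONTH * utilization
    reserved = RESERVED_PER_HOUR * HOURS_PER_MONTH
    return on_demand, reserved

# Break-even utilization: reserve only if the workload runs more than this.
break_even = RESERVED_PER_HOUR / ON_DEMAND_PER_HOUR  # 0.6, i.e. 60% of hours

print(monthly_cost(1.0))  # 24/7 workload: roughly 73 vs 44 USD, reserve it
print(monthly_cost(0.3))  # 30%-duty workload: roughly 22 vs 44 USD, stay on-demand
```

With these sample rates the break-even point is 60% utilization, which is exactly why the recommendation has to depend on the usage of each workload rather than being a blanket policy.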

There are many planning scenarios where CWOM may also help.

For example, instead of migrating everything you have to the Public Cloud based on your current inventory, CWOM lets you understand which resources you really need to pay for in the Cloud of your choice, based on your workloads’ demands and best possible placement, rather than just purchasing the same amount of resources you currently have on-premises (which may be underutilized).

Other simulation scenarios, like finding the best possible placement for new or additional loads, or simulating hardware decommissioning and its impact on your workloads, are also part of the solution.

We don’t need to fly blind anymore and be part of that 30 billion USD wasteland!

By using the power of Analytics and Automation, we now have a smart way of knowing how many resources our IT environments really need, and of taking automated action to optimize the use of our Multicloud infrastructure.

Let’s bring those resources back from their “coma.” We now have a smart way to leverage any cloud while saving money, avoiding budget-increase meetings and unhappy bosses, and keeping apps and workloads performing!


Visit cisco.com/go/cloud to learn more about our Cisco cloud strategy.




Carlos Campos Torres

Technical Solutions Architect

World Wide Data Center & Virtualization