The relationship between compute infrastructure and application development used to be something like the unforgettable slogan from “Field of Dreams.” It was a “build it and the apps will come” model, with application architectures aligning to the specific intricacies of the compute platforms and their operating systems. Application developers and end users were at the mercy of infrastructure teams, and the quality of infrastructure and operations determined how quickly you could develop and deliver your apps.

The cloud changed all that and flipped the model for good.

Today, application velocity and agile software development are considered business-essential and a competitive advantage. A famous example of this is JP Morgan Chase’s CEO telling shareholders about the number of software programmers his company has employed and the number of apps they’ve rolled out. Imagine that happening in the 1980s!

Application Velocity: The Metric that Matters for Business Today

Success or failure in the next era of compute infrastructure innovation and operations will be based on how agile your infrastructure is and how seamlessly your operations can adapt to the ever-changing requirements and velocity of the app. That velocity, and the changes in app requirements, are not going to slow down. In fact, the exciting hardware technology disruptions coming over the next five to 10 years will only accelerate the volume and velocity of app development. So it’s not hyperbole to claim that this is an evolve-or-die moment for compute infrastructure developers and operations.

One natural result of application velocity and hardware innovation is operational complexity. This is especially true if IT teams choose the traditional method of aligning discrete compute systems to meet specific application requirements. That approach makes it difficult to adapt, scale, and maximize efficiency, resulting in higher power and cooling costs and forklift upgrades to accommodate new technologies. Perhaps most importantly, it leads to an inability to respond quickly to dynamic changes in application requirements.

The Era of Programmable Infrastructure

All roads lead to the need for standardized, common, and programmable infrastructure that acts more like fluid pools of resources than fixed silos that inhibit sharing and change. This is a compute system that can change resource allocations with the speed and ease of software. Automation and consistency of management are not an afterthought but part of the ground-up design, because they are the key to delivering flexibility, fluidity, and simplicity of usage in the hybrid cloud world.

This “programmable” compute system must align to a cloud operating model to match the needs of your applications without compromising on simplicity and velocity. It also means being operable from a management framework designed for hybrid cloud infrastructure: one that can deploy, manage, and optimize modern, cloud-native applications in your private cloud and seamlessly move apps into the public cloud.

This is what Cisco Intersight and the UCS X-Series were born to do. The UCS X-Series ushers in a new era of compute platform architecture that was designed ground-up with apps and hybrid cloud operations (and Intersight) in mind. It was “reverse-engineered” with a workload-first approach to achieve the totality of the integrated stack and the public cloud-like operational experience on-prem. The X-Series is equally versed in supporting traditional enterprise workloads that are resident on-prem or cloud-native applications that typically reside in public clouds.

And like modern apps themselves, the architecture of the UCS X-Series is modular and adaptable, which makes it agile and ready for future technology disruptions. This modularity and agility make the X-Series a “programmable” platform that can provide the benefits of blade servers and the flexibility of rack servers, all in one platform. We call this the “unboxing” of the future, as form factors and hardware options will no longer be the driving criteria for computing platforms.

In a way, the combination of the X-Series and Intersight embodies the larger convergence that is bringing the infrastructure operations and app architecture worlds together. The benefits of this tight integration are many, and customers will experience them as the X-Series continues to evolve. Here are three examples:

  • Achieving end-to-end automation to support ‘any’ workload with auto-configuration and deployment, dramatically reducing time to app and TCO
  • Infrastructure operations that can intelligently map user intent to operational goals (e.g., automation, AI/ML workloads, Big Data workloads, and more)
  • UCS Manager customers can migrate their existing policies to Intersight and apply them at a greater scale across their enterprise

Are You Ready to Unbox Your Future?

I started this blog with a movie reference, so I’ll end with one. In the closing scene of “A River Runs Through It,” the narrator says, “Eventually, all things merge into one, and then a river runs through it.” I don’t know about all things, but I hope I’ve made the case that it may certainly be true for compute operations and modern cloud-native app architectures.

The full effects of this convergence are already well underway. How ready are you for the unboxing of your future? Find out more about the Cisco UCS X-Series, then definitely check out the Cisco Insider Series for Cloud webinar.


Watch a short video about the Cisco UCS X-Series
Vikas Ratna

Director, Product Management

Cisco Cloud & Compute