If someone had told me five months ago that the name of the game for cloud in 2020 would be “the ability to scale”, I would probably have thought they were 10 years late. Fast-forward to now, and it seems the biggest asset for organizations is indeed the ability to scale their digital selves to meet the tremendous demand of a world where their physical self is literally constrained by social-distancing mandates.

The dramatic, global migration to online business in 2020 has become the massive, global-scale stress test for cloud computing and the capabilities it introduced years back. Despite some initial hiccups, related more to application design than to infrastructure capacity, and some customer anxiety over availability, cloud services helped organizations meet the demand spike in early 2020[1]. Cloud equipped IT teams with the tools not only to stretch resources elastically and develop faster, but also to scale down where required to control costs.

Indeed, for IT leaders, mid-term strategies might feel… oxymoronic. They need to push diligently ahead with digital transformation to protect business operations and revenue, and ultimately to survive, while carefully prioritizing investments in times of great economic uncertainty. So what does the future look like for cloud, now that the stakes are even higher?

Accelerating digitization further

The big picture of our multicloud reality was already fast-changing and complex, driven by an application revolution. With IT teams supporting 50% more applications, new application types and modular development methodologies, the current spike in demand for online experiences acts as a multiplier. It hits organizations already struggling with complexity, with innovation scattered across the core data center, the edge and public clouds (58% of it outside on-prem environments).

Today’s apps have increased interdependencies, touching heterogeneous systems and infrastructure, as well as different toolsets and processes. Many are likely built or delivered with a combination of public cloud services, chosen from the approximately 1.4 million[2] (!!!) offered across the big public clouds.

And while recent events have had a detrimental impact on global technology spending for 2020, the one category that is up is cloud-based IT infrastructure, according to IDC, as 1 in 3 organizations plan to spend more on app development[3]. The ensuing disruption is actually “seen as an accelerator”. This makes sense, as cloud is the big factory and delivery mechanism for today’s user experiences, at a time when the vast majority of experiences outside the household are in fact digital. According to Gartner[4], “by 2021, organizations with robust, scalable digital commerce will outperform noncommerce organizations by 30 percentage points in sales growth by better using digital channels during the COVID-19 outbreak.”

What is now added to an already complex world is the demand to deliver additional scale and efficiency while absorbing near-total macro-uncertainty.

Rethinking cloud operations for the era of modularity

The reality is that most IT functions and providers responded well. Cisco’s customers, for example, were able to scale out physical data centers quickly to support and connect remote workers, securely provision hybrid infrastructure in minutes to add application nodes, and confidently increase utilization and sweat assets without impacting user experience. Ten or even five years ago, that would likely have been a different story.

The app experience is the non-negotiable KPI, so being able to control all corners of the infrastructure – on-premises and in public clouds – will be key to delivering resilience and assuring application performance at scale. That does not mean looking for a new solution that can “manage everything”, or a one-size-fits-all approach. On the contrary, it means driving efficiency with modular capabilities that offer standardization across the processes and teams supporting the initiatives that matter most. After all, chances are that each application team will work to a unique set of requirements: from existing on-prem system dependencies, to preferred public clouds and tools, to networking, security, governance and data regulatory requirements.

Critical capabilities

A great example of this is automating and building self-service capabilities via policy-based resource provisioning across existing (“legacy”) and new infrastructure, on-premises or in public clouds. This can reduce time-to-market and increase application (or business) velocity. Indeed, the ability to scale rapidly depends on eliminating tickets for resource provisioning and on standardizing different types of infrastructure and platforms across domains (compute, networking, security) via APIs. At least until we live in an “ideal” world, where everything is truly cloud-native and all software scales automatically.
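
To make this concrete, here is a minimal Python sketch of what an API-driven, self-service provisioning request could look like. It is only an illustration: the endpoint URL, payload schema and policy profile names are hypothetical, and in practice the call would target whatever controller or portal fronts your on-prem and public cloud domains.

```python
# Minimal sketch of policy-based, self-service provisioning via an API,
# replacing a manual ticket. Endpoint, schema and profile names are hypothetical.
import requests

PROVISIONING_API = "https://infra-portal.example.com/api/v1/requests"  # hypothetical endpoint

def request_capacity(app: str, domain: str, profile: str, count: int) -> str:
    """Submit a self-service capacity request instead of raising a ticket.

    'profile' names a pre-approved policy (compute shape, network segment,
    security baseline), so no per-request review is needed.
    """
    payload = {
        "application": app,
        "domain": domain,           # e.g. "on-prem-dc1" or "aws-eu-west-1"
        "policy_profile": profile,  # pre-approved by infrastructure/security teams
        "instance_count": count,
    }
    resp = requests.post(PROVISIONING_API, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["request_id"]  # track fulfilment asynchronously

if __name__ == "__main__":
    print(request_capacity("storefront", "aws-eu-west-1", "gold-web-tier", 4))
```

The point of the sketch is the shape of the workflow, not the specific API: the policy carries the approvals, and the request itself becomes a routine, automatable call.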

Another facet of automation comes in the form of simplifying day-0 deployments. With applications becoming ever more abstracted from the infrastructure thanks to technologies such as Kubernetes, containers can become standard building blocks – useful when batch-deploying distributed, purpose-built, complex configuration stacks. This saves precious time that IT operations teams would normally spend on manual tasks, especially now that bare-metal containerization is starting to climb the adoption curve. A significant contributor to simplification here is being able to manage domains (data centers, clouds, networks etc.) from control points in the cloud.
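
As a generic illustration (not a specific product’s tooling), the sketch below uses the open-source Kubernetes Python client to push the same small deployment to several clusters in one pass. The kubeconfig context names, image and replica count are assumptions.

```python
# Sketch: batch-deploy one containerized building block to multiple clusters.
# Cluster context names, namespace, image and replica count are hypothetical.
from kubernetes import client, config

CLUSTERS = ["on-prem-dc1", "aws-eu-west-1", "gcp-europe-west4"]  # kubeconfig contexts

def build_deployment(name: str, image: str, replicas: int) -> client.V1Deployment:
    """Describe a standard building block once, then reuse it everywhere."""
    labels = {"app": name}
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )

if __name__ == "__main__":
    deployment = build_deployment("storefront", "registry.example.com/storefront:1.2", 3)
    for ctx in CLUSTERS:
        config.load_kube_config(context=ctx)  # switch to the target cluster
        apps = client.AppsV1Api()
        apps.create_namespaced_deployment(namespace="default", body=deployment)
        print(f"deployed to {ctx}")
```

The same idea scales from one deployment to a whole configuration stack: define the building blocks once, then let automation fan them out across on-prem and public cloud domains.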

Finally, taking advantage of insights and automation across the full application stack not only leads to cost reduction through better use of resources, but also ensures a superior user experience. Correlating telemetry data from distributed production platforms with user experience and customer journeys means that previously unconnected infrastructure and application teams can work together to identify bottlenecks faster and stay ahead of problems.
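
As a simple illustration of the idea (not any particular monitoring product’s workflow), the sketch below joins two hypothetical telemetry exports – one from infrastructure monitoring, one from user-experience monitoring – on time, and checks which infrastructure signal tracks user-facing response time. The file and column names are assumptions.

```python
# Sketch: correlate infrastructure telemetry with user-experience telemetry.
# File names and column names are hypothetical exports from monitoring tools.
import pandas as pd

infra = pd.read_csv("infra_metrics.csv", parse_dates=["timestamp"])   # cpu_util, net_latency_ms
app = pd.read_csv("app_experience.csv", parse_dates=["timestamp"])    # response_time_ms

# Align the two time series on the nearest sample within 30 seconds, then
# see which infrastructure signal moves with user-facing response time.
merged = pd.merge_asof(
    app.sort_values("timestamp"),
    infra.sort_values("timestamp"),
    on="timestamp",
    tolerance=pd.Timedelta("30s"),
    direction="nearest",
)
print(merged[["response_time_ms", "cpu_util", "net_latency_ms"]].corr())
```

Even a rough correlation like this gives infrastructure and application teams a shared starting point for finding the bottleneck, instead of arguing from separate dashboards.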

The next phase of cloud

As we move (slowly) from the “build” phase of cloud towards the phase of “consistency”, more organizations recognize that “cloud” is much broader than a delivery framework or destination. In a future that requires, more than ever before, carefully balancing innovation with optimum efficiency via laser-focused prioritization of investments, opportunities do not necessarily lie in bold technology investments or the “next big thing”. Rather, they are found in a more inward-looking optimization that also includes people and process. The outcome will be new, tailor-made operating models that align strategic objectives with technology profiles in times of uncertainty.

Stay tuned for updates in the coming months on innovation from our portfolios and how we are working with customers to optimize their cloud and application initiatives and bring their IT teams together.

Resources

What are containers?

What is Kubernetes?

 


[1] IDC Link: Cloud Flattens the COVID-19 Dip But Has Room for More Improvement, Doc #lcUS46335220, May 2020

[2] 451 Research, part of S&P Global Market Intelligence, “Cloud Pricing Index: The Old Managed Service and the Sea”, February 2020

[3] IDC, Customer Perspective: The Impact of the COVID-19 Pandemic on IT Spend for Developer Tools and Cloud, Doc #US46207220, April 2020

[4] Gartner, Mitigate Coronavirus (COVID-19) Business Impacts With Digital Commerce, March 2020

 

 



Authors

Kostas Roungeris

Marketing Manager

Cloud Solutions, EMEAR