The way we look at our IT ecosystems today has changed. We no longer manage infrastructure from the data center out, and just because all systems are “green” doesn’t mean they’re working properly. Applications used to be large monolithic entities; upgrade planning cycles alone took months, and when we did apply an upgrade, everyone was on call “just in case.” For the most part, we were able to manage this ecosystem with a few good tools across network, server, database, security, and log analytics for forensic purposes. The volume of alerts was manageable, and we created filters and rules to surface only the events we thought would be meaningful.
All of this became standard operating procedure, and organizations outsourced or out-tasked certain areas of the ecosystem as the focus shifted to the applications that provided services to the business. The underlying infrastructure just needed to be available to meet the demands of the applications. Availability, KPIs, and SLAs became the rule. Around this time a new technology began to surface: Application Performance Management (APM). APM tools looked under the hood at how an application was performing and created a massive number of new metrics, uncovering anomalies that no human could reasonably process in real time. All this data became the realm of the application team, developers, and data analysts, who were called upon to make sense of it and, ideally, correlate it into actionable items.
Today’s application infrastructure is more distributed
Today, we need to understand the customer’s application experience from wherever they might be. The data center is the cloud, whether that be private, public or hybrid. We still have monolithic applications, but they are moving to more distributed cloud-native environments at a faster-than-ever pace, and an application is no longer a single all-encompassing entity. Businesses rely on a host of third-party services and SaaS applications to process the day-to-day requirements of their digital business.
Now we have metrics, events, logs, and traces (MELT) being generated at unheard-of volumes from all aspects of the ecosystem: network, infrastructure, cloud, apps, security, storage, user experience, and more. This is where full stack observability can help.
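To make the MELT idea concrete, here is a minimal, vendor-neutral sketch of what correlating those signals can look like. The record shapes and field names (such as `trace_id`) are illustrative assumptions, not a specific product’s API; real platforms ingest this data through standards like OpenTelemetry.

```python
# Illustrative sketch only: hypothetical MELT records from different
# layers of the stack, correlated by a shared trace ID. Field names
# are assumptions for the example, not a vendor schema.
from collections import defaultdict

telemetry = [
    {"type": "metric", "trace_id": "t-42", "name": "http.latency_ms", "value": 830},
    {"type": "log",    "trace_id": "t-42", "message": "checkout request timed out"},
    {"type": "trace",  "trace_id": "t-42", "service": "payments", "duration_ms": 812},
    {"type": "event",  "trace_id": "t-99", "message": "deployment completed"},
]

def correlate(records):
    """Group MELT records by trace ID so a single slow transaction
    can be viewed across metrics, events, logs, and traces at once."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec["trace_id"]].append(rec)
    return dict(grouped)

grouped = correlate(telemetry)
print(len(grouped["t-42"]))  # → 3 records tell one story about one transaction
```

The point of the sketch is the grouping step: once signals from every layer share a correlation key, one slow checkout stops being four disconnected alerts and becomes a single, explainable incident.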
As I work with managed service providers around the world to create scalable offers that address this tsunami of MELT data, we begin with a few simple questions about observability:
- How is your customer’s journey to observability going?
- What are your and your customer’s expectations of all this observability data?
- Do you have an observability roadmap?
- What are the greatest challenges that customers face in maximizing observability?
Getting started means understanding the value of full stack observability
I’ve learned that customers are keen to understand what observability (o11y) can do for their business and how it will allow them to enhance their customer experience. They are not sure of the best way to achieve the desired outcome, and the resources to make it all happen are scarce. In fact, AppDynamics’ “Journey to Observability,” a 2022 survey of more than 1,200 IT professionals, found that 75% were challenged to find o11y experts. Customers are focused on digital transformation and are seeking the best way to achieve results with the least impact on operations and the greatest impression on customer experience. They fear that their investment will go unrealized and are looking for trusted advisors to guide them on their journey.
Some companies are choosing to hire third-party organizations to assess and transform their application landscape, while others are confident in their ability to keep it in-house. In either scenario, organizations require some form of o11y to be successful. Solutions like Cisco’s Full-Stack Observability provide best-in-class capabilities that integrate AppDynamics and ThousandEyes to deliver meaningful o11y data that goes beyond MELT and correlates to business outcomes.
It’s all about driving business outcomes
Customers are looking for predictable costs and meaningful outcomes that go beyond just another tool and deliver real value and insights to their business. This can be achieved by partnering with skilled MSPs that are investing in the journey to FSOaaS today to achieve scale that benefits all customers. Consider what becomes possible when you add an open, extensible platform that ingests OpenTelemetry (OTel) data: correlating that data in a business context turns telemetry into stronger business outcomes. However, to move at the speed of business, customers require not only the technology but also the people and best practices to make it a reality.
The challenge is how to achieve these outcomes in the most effective manner. Bespoke solutions are not efficient at achieving scale and can be costly. The evolution towards FSOaaS is happening today. The scale that an as-a-Service model affords with a rapid time to value supports the needs of the business, IT, DevOps, and SRE teams all at once. Leading Managed Service Providers (MSPs) are now offering enhanced Managed FSO capabilities that are modular in nature and flexible in providing services needed to complement a customer’s current skills and mode of operation.
The key to any Managed FSO offer is how well the elements of the offer integrate in an open extensible manner based on standardized models, procedures, and methodologies. The time to value needs to match a customer’s drive towards digital transformation allowing them to mitigate risks involved in the transformation, achieve greater predictability, and deliver unprecedented outcomes.
It’s time to consider a partner that is committed to helping customers achieve their desired outcomes.