Content providers (“providers”) and communication service providers (CSPs) are experiencing a transformation from broadcast television to internet TV. The Cisco Annual Internet Report for 2018 – 2023 estimates that by 2023, streaming video could account for a dominant portion of internet traffic. Fueling this transformation are providers that want to produce tailored content for their subscribers, allowing them to differentiate themselves from the competition and win viewership wars.
Additionally, the growth in internet-connected devices, from smart TVs to smartphones, lets consumers watch whatever they want, whenever they want, wherever they are in the world. While new streaming services and devices give consumers more content choices and more opportunities to stream, with current content delivery network (CDN) designs content providers and CSPs have very little control over the quality of the streaming experience, and this can have a negative impact on their end users. Furthermore, current commercial CDNs don’t collaborate with CSPs to optimize traffic delivery, improve quality of experience, or share in the monetization of content delivery services for publishers.
For providers, the biggest concern is the quality of experience their subscribers have when consuming content. Buffering, pixelation, long downloads, and delays or glitches in a live stream will ultimately cause subscribers to abandon viewing and perhaps even switch to another subscription service. A December 2021 Kantar study found that 85% of U.S. households have a streaming service and that the average U.S. household holds 4.7 streaming subscriptions, which means that if subscribers perceive an issue with one service, they can easily find another that offers better quality. Providers find this frustrating because they don’t control their streams on the underlying transport networks. For CSPs, it’s frustrating because they’re blamed for the poor experience but often don’t have enough control over the traffic to improve it. Congestion, demand spikes, and transport distance can all introduce the high or variable latency that undermines streaming quality:
- Congestion: Network congestion acts like a traffic jam on a freeway, slowing everything down. If the commercial CDN provider and the underlying CSP aren’t using prioritized traffic controls such as segment routing over network slices, congestion forces IP packets onto sub-optimal routes from source to destination. The longer transit time results in a poor streaming experience.
- Demand Spikes: Like congestion, a demand spike on a content platform or service provider network can slow down content delivery. A demand spike on a regional content server causes the server to fall behind in processing requests, leaving the end user with an error message or a buffering wheel.
- Transport Distance: The distance between the end user and the source content plays a critical role in the streaming experience. As distance increases, so do transit time and the potential for congestion or other network-impacting events. Especially for live-streamed events like sports matches or concerts, that added transit time introduces latency that ruins the consumer’s experience (see the back-of-the-envelope sketch after this list).
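To put rough numbers on the distance point, here is a back-of-the-envelope sketch in Python. It assumes light travels through optical fiber at roughly 200,000 km/s (about 5 µs per km) and ignores queuing, routing detours, and processing delay, so real-world figures will be higher; the distances used are illustrative, not measured paths.

```python
# Back-of-the-envelope propagation delay: why transport distance matters.
# Assumes ~200,000 km/s signal speed in fiber (~5 microseconds per km) and a
# straight-line path; real routes add queuing, detours, and processing delay.

FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in optical fiber

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay in milliseconds for a single pass over the path."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

# Illustrative distances: a metro edge cache vs. far-away content sources.
for label, km in [("metro edge cache", 50), ("regional data center", 1500),
                  ("cross-continent origin", 4000)]:
    rtt = 2 * one_way_delay_ms(km)
    print(f"{label:>24}: ~{rtt:.1f} ms round trip before any congestion")
```

Even before congestion or retransmissions enter the picture, every manifest fetch, segment request, and license call in a session pays that round trip again, which is one reason shrinking the distance to the cache matters so much for live streams.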
The current commercial CDN model can’t effectively mitigate these problems for content delivery. It’s impractical and costly for CSPs to continually augment transport routes with more capacity. Building a big pipe and then underutilizing it is wasteful and only serves as a band-aid, because a big enough demand spike or a critical network outage will consume the extra capacity and congestion will return.
Content providers and CSPs could work to increase their connections to commercial CDN peering points, but the commercial CDNs don’t want to overbuild their networks or increase the complexity of managing all those peering points. Additionally, more peering points won’t necessarily overcome the distance or congestion issues that create poor user experiences.
A better solution is to change the current content provider -> commercial CDN -> communication service provider architecture and operational model. Elevating the CSP to play a larger role in the solution can help reduce or eliminate the current struggles. Bringing content into the CSP network and replicating it across a distributed data center design, where each data center serves not as a giant aggregation system but as a local market access point for content, reduces network strain.
In this design, the CSPs take on the role of CDN. They build content servers throughout their network, closer to end consumers than ever before, which can eliminate the network conditions that cause latency and poor experiences. Additionally, CSPs can enable more granular traffic controls to help ensure proper prioritization of streaming traffic. For content providers, this design increases content replication at storage points near the end of the network, which creates fewer network choke points and helps end consumers have great experiences with a provider’s tailored content.
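As a rough illustration of that design, the sketch below picks a serving cache from a set of distributed, in-network caches by preferring the lowest-latency site that still has headroom. The cache names, latency figures, and load threshold are hypothetical; a production CDN would also weigh health, content availability, and business policy.

```python
# Illustrative cache selection for a distributed, in-network CDN design.
# Cache sites, latencies, and the load threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class EdgeCache:
    site: str
    rtt_ms: float    # measured round-trip time from the viewer's region
    load_pct: float  # current utilization of the cache

def pick_cache(caches: list[EdgeCache], max_load_pct: float = 85.0) -> EdgeCache:
    """Prefer the lowest-latency cache that still has capacity headroom."""
    healthy = [c for c in caches if c.load_pct < max_load_pct]
    candidates = healthy or caches  # fall back to all caches if everything is busy
    return min(candidates, key=lambda c: c.rtt_ms)

caches = [
    EdgeCache("metro-pop-1", rtt_ms=4.0, load_pct=60.0),
    EdgeCache("metro-pop-2", rtt_ms=6.5, load_pct=92.0),
    EdgeCache("regional-dc", rtt_ms=22.0, load_pct=40.0),
]
print(pick_cache(caches).site)  # -> metro-pop-1
```

The point of the sketch is simply that when caches live in the local market, the best candidate is a few milliseconds away instead of a distant regional data center.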
Cisco and Qwilt have developed the Edge Cloud for Content Delivery solution, based on open caching, to help providers and CSPs improve the CDN relationship. Using Qwilt’s software, content providers can operate with one API to update all the distributed content servers within the CSP partner networks. Qwilt has built relationships with major content providers like Disney, and the edge cloud solution has been building momentum, helping to form recent partnerships with TIM Brazil, Windstream, and Airtel. The solution was designed to remove inefficiencies in content delivery, but with the coming wave of the metaverse, volumetric streaming, and other forthcoming applications, this design could prove to be the standard for supporting those immersive experiences as well.
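Conceptually, the “one API” model means a content provider issues a single control-plane call and the fan-out to every in-network cache happens behind it. The sketch below shows what that fan-out might look like; the endpoint path, payload fields, and cache hosts are hypothetical placeholders for illustration only and are not the Qwilt or Open Caching API, whose actual interfaces are defined by the vendor and the relevant Streaming Video Alliance specifications.

```python
# Hypothetical sketch: the fan-out a single control-plane "prefetch" call might
# trigger across distributed in-network caches. The URL, payload fields, and
# cache hosts below are illustrative placeholders, not a real vendor API.
import requests

EDGE_CACHES = [
    "https://cache1.csp-a.example.net",
    "https://cache2.csp-b.example.net",
]

def prefetch_everywhere(content_url: str) -> dict[str, int]:
    """Ask each in-network cache to pre-position a piece of content."""
    results = {}
    for cache in EDGE_CACHES:
        resp = requests.post(
            f"{cache}/api/v1/prefetch",  # hypothetical endpoint
            json={"url": content_url, "ttl_hours": 24},
            timeout=5,
        )
        results[cache] = resp.status_code
    return results

# The provider would make one call; the platform handles the fan-out:
# prefetch_everywhere("https://cdn.provider.example/live/event123/manifest.m3u8")
```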
Stay tuned for our next blog on how the edge cloud solution can evolve in support of these new experiences, and for a more in-depth discussion on the benefits provided by open caching.
Most streamers, from YouTube to Netflix, have deployed cache servers in CSP networks for a while now, but these mostly sit in the CSPs’ data centers; the edge routers or PEs are usually no more than an MPLS hop or two from the data center devices. A strategy that demands deploying CDN nodes at edge locations could actually cost CSPs far more than any capacity upgrades would. CDN deployment also depends on the content provider and its device capabilities, and it’s a decision that has to take the size of the network and of the area the CSP serves into consideration. Slicing, still in its nascent stages, is an extension of traffic engineering and has to coexist with the existing network QoS design. The granularity needed to differentiate specific destination traffic before slicing can kick in requires telemetry data to be analyzed and real-time traffic manipulation from the end-user device, which calls for automation and AI/ML-enabled telemetry. If CSPs are to deploy a sustainable network model that caters to modern-day data consumption requirements without eating into their budgets, they need a comprehensive, synergistic approach to network planning and architecture.
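To make the telemetry-driven slicing point concrete, here is a minimal, hypothetical sketch: it averages recent latency samples for a destination prefix and flags when that traffic should be steered into a higher-priority slice or SR policy. The thresholds, prefix, and sample values are assumptions for illustration; a real deployment would feed a controller from streaming telemetry collectors rather than an in-process loop like this.

```python
# Hypothetical sketch: decide when telemetry suggests steering a destination's
# traffic into a higher-priority slice / SR policy. Thresholds, prefixes, and
# samples are illustrative assumptions, not values from any production design.
from collections import deque
from statistics import mean

LATENCY_BUDGET_MS = 30.0  # assumed per-service latency target
WINDOW = 20               # number of recent samples to average

class SliceSteeringMonitor:
    def __init__(self) -> None:
        self.samples: dict[str, deque[float]] = {}

    def record(self, prefix: str, latency_ms: float) -> None:
        """Store a telemetry latency sample for a destination prefix."""
        self.samples.setdefault(prefix, deque(maxlen=WINDOW)).append(latency_ms)

    def needs_priority_slice(self, prefix: str) -> bool:
        """True when the recent average exceeds the latency budget."""
        window = self.samples.get(prefix)
        return bool(window) and mean(window) > LATENCY_BUDGET_MS

monitor = SliceSteeringMonitor()
for sample in (28.0, 35.0, 41.0, 38.0):  # illustrative telemetry feed
    monitor.record("203.0.113.0/24", sample)
if monitor.needs_priority_slice("203.0.113.0/24"):
    print("steer 203.0.113.0/24 into the priority slice")  # hand off to a controller
```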