Content publishers and communication service providers (CSPs) are experiencing a transformation from broadcast television to content streamed over the internet. Fueling this transformation is competition from new providers that want to produce tailored content for their subscribers, allowing them to differentiate themselves from the competition and win viewers' attention. Up to now, this streaming content has been mostly video and not bidirectional, but that will soon change.

The metaverse and applications like remote surgery or drone delivery services are on the horizon but can’t arrive while the network still struggles with common problems like asymmetrical bandwidth speeds, scalability, and variable latency conditions resulting from congestion and transport distance. Buffering and pixelization are bad during a video stream, but those delays may prove deadly during remote surgery or autonomous driving. Therefore, CSPs, content publishers, and other players within the infrastructure ecosystem need to evolve from traditional content delivery network (CDN) architecture and move compute power to the edge near the consumption point.

Visualize an evolved metaverse or immersive application this way: instead of jumping on a treadmill or spin bike and joining an online exercise class displayed on a screen, you put on a virtual reality (VR) headset and meet class participants first in a virtual gym room. You could engage other participants in conversation, exchange high fives, and have a more three-dimensional, immersive experience. The experience would show you passing other participants, a changing landscape that mimics the actual physical environment, and sights and sounds along the way.

Expanding on this even further, your local store could have a virtual storefront where you shop for items that are then sent home via a drone delivery service. Using a 'virtual proximity' algorithm, the application would localize the storefront and personnel avatars to represent your nearby store. This way the application can build on the sense of community and convenience you feel when you shop locally and engage with the same personnel you see when you visit in person.
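As a rough illustration of what such a 'virtual proximity' selection could look like, the Python sketch below picks the physical store closest to the user so the virtual storefront can be themed after the location they already know. The store list, coordinates, and function names are hypothetical assumptions for illustration, not part of any real shopping platform.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Store:
    name: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_store(user_lat, user_lon, stores):
    """Pick the physical store closest to the user so the virtual
    storefront and staff avatars mirror that nearby location."""
    return min(stores, key=lambda s: haversine_km(user_lat, user_lon, s.lat, s.lon))

# Hypothetical data: two stores and one shopper location.
stores = [Store("Downtown", 45.52, -122.68), Store("Eastside", 45.53, -122.56)]
print(nearest_store(45.50, -122.65, stores).name)  # -> "Downtown"
```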

These realistic, immersive experiences are what providers want to deliver as they're more engaging, more authentic, and will create new markets that can drive new revenue streams. For this to become reality, providers need greater quality control within CSP transport networks, and they need content and any artificial intelligence (AI) or machine learning (ML) enabled contributions located deeper in the network, closer to their end users. With this control and access, providers can be assured that the network will supply the quality of experience subscribers expect. And for critical decision services like autonomous driving or drone flight control, the compute power must be in the local market to avoid disastrous outcomes.

Latency can be reduced by shortening distances and moving content as close as possible to end consumers. This cuts the distance traffic travels from peering points, reduces the likelihood of encountering congestion, and avoids the cost associated with transporting that traffic. Adding compute power to the same edge location means the CSP creates a localized intelligent node capable of massive throughput, supporting millions of simultaneous stream connections without adding complexity to network management or operations.
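To make the distance-to-latency relationship concrete: light in optical fiber propagates at roughly two-thirds the speed of light in a vacuum, or about 200 km per millisecond one way. The sketch below is a back-of-the-envelope estimate under that assumption, not a measurement of any specific network; the example distances are illustrative.

```python
# Light in fiber covers roughly 200 km per millisecond (about 2/3 of c).
FIBER_KM_PER_MS = 200.0

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over fiber, ignoring
    queuing, serialization, and processing delays."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("remote peering point", 2000),
                  ("regional data center", 400),
                  ("metro edge node", 50)]:
    print(f"{label}: {km} km -> at least {rtt_floor_ms(km):.2f} ms RTT")
# e.g. 2000 km -> at least 20.00 ms, 50 km -> at least 0.50 ms
```

Shrinking the distance from thousands of kilometers to a metro-area edge node lowers the propagation floor from tens of milliseconds to a fraction of a millisecond, before congestion or processing delay is even considered.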

These intelligent node deployments need to be easy to manage, economical, scalable, and sustainable. To keep the design simple, economically feasible, and sustainable, the systems being put in place need to offer leading-edge throughput capacity, adapt to fluctuating traffic demands, be flexible in deployment options, and be rack, power, and space efficient. Recent Cisco announcements on disaggregated data center designs for web scalers advance these efforts to create more intelligent nodes in support of a more content-rich network.

The design for these nodes needs to include the compute power to serve the in-demand applications, but server counts don't need to be so large that they offset the economics of the location or the potential positive environmental impact. To keep designs and deployments streamlined, both content providers and CSPs need deep network observability to identify tangible performance numbers and the factors affecting them. With tools such as ThousandEyes or Crosswork Network Insights providing full-stack observability at that level of detail, workload distribution could become a hybrid deployment across edge and larger aggregation computing locations. This could be a deterministic deployment model: application workloads that demand large compute power but tolerate higher latency are centrally located in large cloud centers, while applications with lighter compute needs and lower latency requirements are placed at the edge to optimize their performance.
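A deterministic placement policy of that kind can be expressed very simply. The Python sketch below is a hypothetical illustration, not a Cisco tool or API: it assumes each workload is described by its compute demand and latency budget, and routes latency-sensitive, lighter workloads to the edge while keeping heavy, latency-tolerant workloads in central cloud locations. The thresholds and workload names are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative thresholds; a real policy would derive these from
# observability data (measured RTT, utilization), not fixed constants.
LATENCY_SENSITIVE_MS = 20.0   # workloads needing less than this go to the edge
EDGE_CPU_LIMIT = 16           # vCPUs an edge node can spare per workload

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # end-to-end latency the application can tolerate
    cpu_demand: int           # vCPUs the workload requires

def place(w: Workload) -> str:
    """Deterministic placement: latency-sensitive and light enough -> edge;
    otherwise -> central cloud, where compute is plentiful."""
    if w.latency_budget_ms < LATENCY_SENSITIVE_MS and w.cpu_demand <= EDGE_CPU_LIMIT:
        return "edge"
    return "central-cloud"

for w in [Workload("vr-fitness-session", 15, 8),
          Workload("drone-control-loop", 10, 4),
          Workload("recommendation-model-training", 500, 128)]:
    print(f"{w.name}: {place(w)}")
# vr-fitness-session and drone-control-loop land at the edge;
# the latency-tolerant training job stays in the central cloud.
```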

These deterministic workload deployments, along with improved quality-of-service parameters applied throughout the network, will create a network design that serves as the foundation for immersive experiences: experiences that can be localized to foster community building and create connections for an inclusive future for all.

We encourage you to learn more about the technologies and solutions discussed in this blog, such as edge cloud for content delivery and our mass-scale infrastructure for cloud companies. Also be sure to check out our CDN success with Qwilt.

Authors

Stephane Ribot

Director, Business Development

Edge Cloud Services