
Time to take gaming seriously

Video gaming is huge, by any measure you choose. By revenue, it’s expected to be more than $150 billion in 2019, making it bigger than movies, TV and digital music, with a strong 9% CAGR. 

And it’s not just teenagers. Two-thirds of adult Americans — your paying customers — call themselves gamers. 

This makes gaming one of the biggest opportunities you face as a cable provider today. But how can you win those gamers over and generate revenue from them? 

New demands on the network

A person’s overall online gaming experience depends on lots of factors, from their client device’s performance through to the GPUs in the data center. But the network — your network — clearly plays a critical role. And gaming-related traffic is already growing on your networks. Cisco’s VNI predicts that by 2022, gaming will make up 15% of all internet traffic.

When gamers talk about “lag” affecting their play, saying they care about milliseconds of latency, they really mean overall performance: latency, jitter and drops. Notice how latency changes the gamer’s position (red vs. blue) in the screenshot below:

[Screenshot: the gamer’s position under different latency conditions (red vs. blue)]

Many would even pay a premium not just for a lower ping time, but also for a more stable one, with less jitter and no dropped packets. Deterministic SLAs become key. 
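
A quick way to ground those terms: the sketch below, purely illustrative, estimates average latency, jitter and loss from simple TCP connect probes. The host, port and sample count are placeholder assumptions, and a real gaming QoE probe would measure the game’s own UDP traffic rather than TCP handshakes.

```python
# Illustrative only: estimate latency, jitter and packet loss from simple
# TCP connect probes. Placeholder host/port; a real gaming QoE probe would
# observe the game's own UDP traffic.
import socket
import statistics
import time

def probe(host: str, port: int = 443, samples: int = 20, timeout: float = 1.0) -> dict:
    rtts, lost = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)  # ms
        except OSError:
            lost += 1
        time.sleep(0.05)  # pace the probes
    return {
        "latency_ms": round(statistics.mean(rtts), 2) if rtts else None,
        "jitter_ms": round(statistics.stdev(rtts), 2) if len(rtts) > 1 else 0.0,
        "loss_pct": 100.0 * lost / samples,
    }

if __name__ == "__main__":
    print(probe("example.com"))
```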

But latency, jitter and drops aren’t the only factors here. Gamers also need tremendous bandwidth, especially for:

  • Downloading games (and subsequent patches) after the purchase. Many games can exceed 100 GB in size!
  • Watching others’ video gameplay on YouTube or Twitch. The most popular gamer on YouTube has nearly 35 million subscribers!
  • Playing games via cloud gaming services such as the upcoming Google Stadia. 4K cloud gaming could require around 15 GB per hour, twice that of a Netflix 4K movie (a quick sanity check of that figure follows).
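
The 15 GB per hour figure is easy to sanity-check. Assuming a 4K cloud gaming stream of roughly 35 Mbps (an assumption; actual bitrates vary by service and codec), the arithmetic looks like this:

```python
# Back-of-the-envelope check of "~15 GB per hour" for 4K cloud gaming.
# The 35 Mbps bitrate is an assumption; real bitrates vary by service and codec.
stream_mbps = 35
seconds_per_hour = 3600
gb_per_hour = stream_mbps * seconds_per_hour / 8 / 1000  # megabits -> gigabytes
print(f"~{gb_per_hour:.1f} GB per hour")  # ~15.8 GB
```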

In many cases, upstream bandwidth is as important as downstream bandwidth, giving cable ISPs an opportunity to differentiate themselves on all of these factors with end-to-end innovations.

Your chance to lead

As a cable ISP, you’re the first port of call for gamers looking for a better experience. You can earn their loyalty with enhanced Quality of Experience and even drive new premium service revenue from it. 

There’s opportunity for you to be creative, forging new partnerships with gaming providers, hosting gaming servers in your facilities, and even providing premium SLAs for your gaming customers along with new service plans. 

But there’s plenty to be done in the network to make these opportunities real. 

At the SCTE Expo, we will be discussing specific recommendations for each network domain. To give a teaser: in the access network domain, you need to take action to reduce congestion and increase data rates, setting up prioritized service flows for gaming to assure QoS. New technologies like Low Latency DOCSIS (LLD) will be critical for delivering the performance your customers want, optimizing traffic flows and potentially delivering sub-1ms latency without any need to overhaul your HFC network infrastructure itself. In the peering domain, you need to … OK, let’s save that for the live discussion. We will be happy to help on all those fronts.
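
To illustrate the core idea behind LLD: traffic that does not build queues (for example, game traffic carrying a non-queue-building marking) is steered into a shallow low-latency service flow, while everything else stays on the classic, deep-buffered flow. The sketch below is conceptual only; the DSCP value and flow names are assumptions for illustration, not DOCSIS configuration.

```python
# Conceptual sketch of the dual-queue idea behind Low Latency DOCSIS (LLD):
# non-queue-building (NQB) traffic maps to a shallow low-latency service
# flow, everything else to the classic flow. The DSCP value and flow names
# are illustrative assumptions, not DOCSIS configuration.
from dataclasses import dataclass

NQB_DSCP = 45  # assumed NQB marking; operators may classify differently

@dataclass
class Packet:
    src: str
    dscp: int
    length: int

def service_flow(pkt: Packet) -> str:
    """Return the service flow this packet would be steered to."""
    if pkt.dscp == NQB_DSCP:
        return "low-latency-flow"  # shallow queue, minimal buffering
    return "classic-flow"          # deep queue, AQM-managed

if __name__ == "__main__":
    for pkt in [Packet("game-client", 45, 120), Packet("backup-job", 0, 1500)]:
        print(f"{pkt.src} -> {service_flow(pkt)}")
```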

The cable edge is your competitive edge

Gaming is not the only low-latency use case in town. For example, Mobile Xhaul (including CBRS) and IoT applications depend on ultra-responsive and reliable network connectivity between nodes. And there are plenty of other use cases beyond gaming that are putting new strains on pure capacity, including video and CDNs. 

All of these use cases will benefit from DOCSIS optimizations too, but those are only part of the solution.

IT companies of all shapes and sizes are recognizing that for many of these use cases, putting application compute closer to the customers, at the edge, is the only way forward.  

After all, the best way to reduce latency (and offer better experience) is to cut the route length by hosting application workloads as geographically and topologically close to the customers as possible. This approach also reduces the need for high network bandwidth capacity to the centralized data centers for bandwidth-heavy applications like video and CDN. 
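
The physics is simple: light in fiber propagates at roughly 200,000 km/s, so fiber distance alone puts a floor under round-trip time before any queuing, processing or access-network delay is added. A rough calculation, with purely illustrative distances:

```python
# Rough propagation-delay math: light in fiber travels at ~200,000 km/s,
# so route length alone sets a lower bound on round-trip time (RTT).
# The distances below are illustrative, not measurements.
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s equals ~200 km per millisecond, one way

def min_rtt_ms(one_way_km: float) -> float:
    """Lower bound on RTT from fiber propagation delay alone."""
    return 2 * one_way_km / FIBER_KM_PER_MS

for label, km in [("local hub site", 50),
                  ("in-region data center", 500),
                  ("distant cloud region", 2000)]:
    print(f"{label:>22}: >= {min_rtt_ms(km):.1f} ms RTT")
```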

Imagine a gaming server colocated in a hub giving local players less than 10ms latency/jitter. Or a machine-vision application that monitors surveillance camera footage for alerts right at the edge, eliminating the need to send the whole livestream back to a central data center. The possibilities are endless. 

Expand your hub sites into application cloud edge

In the edge world, your real differentiator becomes the thousands of hub sites that you use to house your CMTS/CCAP, EQAM and other access equipment — sites that SaaS companies and IT startups simply can’t replicate. Far from being a liability to shed and consolidate, this distributed infrastructure is one of your critical advantages.  

By expanding the role of your hub sites into application cloud edge sites, you can increase utilization of your existing infrastructure (for example, cloud-native CCAP) and generate revenue (for example, hosting B2B applications), both by innovating new services of your own and by giving third-party service providers geographic proximity to their B2B and B2C users.

If you’re also a mobile operator, this model lets you move many virtualized RAN functions into your hub sites, leaving a streamlined set of functions at the cell site itself (Rakuten is using this edge cloud model for its 5G-ready rollout across 4,000 edge sites).

Making cable edge compute happen

We’ve introduced the concept of Cable Edge Compute, describing how you can turn your hubs into next-generation, application-centric cloud sites to capture this wide-ranging opportunity.

While edge compute architectures do present a number of challenges — from physical space, power and cooling constraints to extra management complexity and new investment in equipment — these are all solvable with the right innovations, design and management approaches. It’s vital to approach an initiative like this with an end-to-end service mindset, looking at topics like assurance, orchestration and scalability from the start.

[Figure: A four-layer model for cable SP edge compute]

Four key ingredients for cable edge compute

There are essentially four key ingredients in a cost-optimized cable edge compute architecture:

  1. Edge hardware: takes the form of standardized, common-SKU modular pods with x86 compute nodes and top-of-rack switches, with optional GPUs and other specialized processing acceleration for specific applications. Modularity, consistency and flexibility are key here, so you can scale easily.
  2. Software stack: enables the edge hardware to optimally host a wide range of virtualized applications in containers, VMs or both, whether managed through Kubernetes, OpenStack or something else. What’s important is to minimize the number of x86 CPU cores consumed by the software stack itself and to provide deterministic performance. Cisco has made this possible by combining cloud controller nodes with the compute nodes at the edge, while moving storage and management nodes to remote clusters with specific optimizations and security. This optimizes the use of physical space and power in the hub site.
  3. Network fabric: provides ultra-fast connectivity for application workloads to communicate with each other and with consumers, via a one- or two-tier programmable fabric based on 10GE/40GE/100GE/400GE Ethernet transport with end-to-end IPv6 and BGP-based VPNs.
  4. SDN-based automation, orchestration and assurance: finally, this infrastructure model depends entirely on SDN with automation, orchestration and assurance. Configuration and provisioning must be possible remotely via intent files, for example. At this scale, with an architecture this distributed, tasks should be zero-touch across the lifecycle. Assurance is utterly foundational, both to meet appropriate per-application SLAs and to enforce policy around prioritization, security and privacy (a minimal sketch of such an SLA check appears just below).
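
To make the assurance point concrete, here is a hedged sketch of a per-application SLA check, in which KPIs measured at an edge site are compared against declared targets and violations are flagged for the automation layer to act on. The application names and thresholds are illustrative assumptions, not a product API.

```python
# Illustrative per-application SLA check at an edge site: compare measured
# KPIs against declared targets and flag violations for the automation and
# orchestration layer. Names and thresholds are assumptions, not a product API.
SLA_TARGETS = {
    "cloud-gaming":   {"latency_ms": 10.0, "jitter_ms": 2.0, "loss_pct": 0.1},
    "machine-vision": {"latency_ms": 30.0, "jitter_ms": 10.0, "loss_pct": 0.5},
}

def sla_violations(app: str, measured: dict) -> list:
    """Return the list of KPIs that exceed the app's SLA targets."""
    targets = SLA_TARGETS.get(app, {})
    return [kpi for kpi, limit in targets.items()
            if measured.get(kpi, float("inf")) > limit]

if __name__ == "__main__":
    measured = {"latency_ms": 12.4, "jitter_ms": 1.1, "loss_pct": 0.0}
    print("SLA violations:", sla_violations("cloud-gaming", measured) or "none")
```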

[Figure: Automation, orchestration and assurance in edge domains]

Discover your opportunity with low latency and edge compute

In the new world of low-latency apps delivered through the edge, cable SPs are in a great position.  

And there’s never been a better opportunity to learn more about what this future holds. Cisco CX is presenting at SCTE Cable-Tec Expo on the gaming opportunity and cable edge compute, and we’ve published two new white papers that you can consult as an SCTE member. 

Join us at Cable-Tec

Are you at Cable-Tec? Here’s how to learn more: 

  • Visit our booth, #1301, to talk with me and other Cisco experts about the opportunities you face. 
  • Attend the session “Low Latency DOCSIS: Current State and Future Vision” in Room 243-244 on Monday, September 30, 2019, 3:30 PM – 4:30 PM.
  • Attend the session “Edge Computing in Converged Networks and 5G Fronthaul/Backhaul Opportunities” in Room 211-213 on Wednesday, October 2, 2019, 9:00 AM – 10:00 AM.

Didn’t make it to New Orleans this year? Find out more about our solutions for cable providers here. Through our CX lifecycle services, we can help you every step of the way as you evolve to capture the opportunities presented by low latency and edge compute.

 



Authors

Rajiv Asati

CTO & Cisco Fellow

CX