
Written by Ramin Farassat

More and more service providers are shifting toward cloud-delivered video, just as the video-heavy Consumer Electronics Show is once again at hand. CES is all about consumer experiences — mostly video, and increasingly driven from the cloud.

I want to share three examples of how we’re helping service providers scale up with cloud-delivered video. Specifically, I’ll focus on work happening at Cisco and with our customers:

  1. Architecting to move video workloads between public and private clouds
  2. Live video streaming
  3. Scalable recording

So. Here goes. Example 1: Implementing a “cloud-ready” video strategy that works on both private and public clouds. Generally speaking, a “private cloud” is architected and built on premises, at a service provider’s facility, to process, manage, monitor, store and stream video, whereas a “public cloud” is essentially rented infrastructure from the likes of Amazon Web Services, Google, Microsoft Azure, and others.

Operators tend to want a “cloud-ready” architecture for video so that they can cost-effectively scale to meet demand, in a timely manner, without paying for that “elasticity” capacity when it isn’t being used. They’re attracted to the flexibility of a private cloud architecture, which gives the benefits of scale and resource elasticity, combined with the ability to burst into the public cloud when maximum elasticity is needed. It’s the classic “pay as you go” strategy that service providers have favored for decades.

Mostly, they just want the video they distribute to work from a cloud. It could start out on a private cloud and shift toward a public cloud; it all depends on the use case. The point is that, with a cloud architecture, compute, connectivity and storage resources can be spun up in either environment to fulfill the deliverable at hand: live streaming, recording, disaster recovery, and so on.

Example 2: Live video streaming from a public or private cloud. One reason this is top of mind for service providers is speed to market, especially when it comes to spinning up new channels or services (think OTT-style fare and “long-tail” material).

As context, prior to the emergence of cloud-delivered video, launching a new channel or service was anything but speedy. It involved racking and stacking hardware, often with disparate components for encoding, transcoding, encryption, and packaging. The more recent move to virtual machines only shifted this effort into software; operationally, little changed. By moving those video functions to cloud-based microservices architectures, new channels can be launched in a matter of seconds.
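
To make that concrete, here is a minimal sketch of what “launching a channel” can look like when the transcoding function runs as a containerized microservice on Kubernetes. The container image, namespace, and labels are hypothetical, and this illustrates the general pattern rather than the Infinite Video Platform’s actual API.

```python
# A minimal sketch: launch one channel's transcoder as a containerized
# microservice. Image name, namespace, and labels are hypothetical.
from kubernetes import client, config


def launch_channel(channel_id: str, source_url: str) -> None:
    config.load_kube_config()  # use the local kubeconfig for cluster credentials
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=f"transcode-{channel_id}"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"channel": channel_id}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"channel": channel_id}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="transcoder",
                            image="registry.example.com/video/transcoder:latest",  # hypothetical image
                            env=[client.V1EnvVar(name="SOURCE_URL", value=source_url)],
                        )
                    ]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="video", body=deployment)
```

Launching, in other words, becomes an API call that schedules a container, rather than a hardware or VM provisioning project.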

Another key driver for live video streaming from the cloud is resiliency. When something goes wrong in traditional video distribution, shifting to cloud-based distribution is now a solid backup plan. Instead of the dreaded “dead air,” cloud-based redundancy enables a switchover to the exact same services at a fraction of the cost, and, with microservices architectures, in dramatically less time. Operators tend to like this because it obviates the need to build a fully redundant facility; they can instead light up backup capacity only when it’s needed, and only for the specific services that need it.
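
Resiliency of this kind can also be sketched in a few lines. The example below, again assuming Kubernetes and hypothetical names, keeps a cloud-based backup channel at zero replicas until a health check on the primary feed fails, then scales it up; the standby costs essentially nothing until the moment it is needed.

```python
# A minimal failover sketch: keep the cloud backup scaled to zero until the
# primary feed fails its health check. The health URL, namespace, and the
# "backup-channel" deployment are hypothetical, not product APIs.
import time

import requests
from kubernetes import client, config

PRIMARY_HEALTH_URL = "https://primary.example.com/healthz"  # hypothetical probe


def primary_is_healthy() -> bool:
    try:
        return requests.get(PRIMARY_HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False


def failover_loop() -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    while True:
        # Run the standby only while the primary is down, so redundant
        # capacity is paid for only while it is actually carrying traffic.
        replicas = 0 if primary_is_healthy() else 1
        apps.patch_namespaced_deployment_scale(
            name="backup-channel",
            namespace="video",
            body={"spec": {"replicas": replicas}},
        )
        time.sleep(10)
```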

Example 3: Scalable recording in the cloud. Not news, in and of itself. Some of our service provider customers have spent the past two or three years building out ways to provide the same DVR-type functionality consumers have at home, but in the cloud. They’re no longer in “test it out” mode: they’re deploying to reach hundreds of thousands, even millions, of customers who are recording shows and want to stream them on many devices and from various locations. This requires flexible scaling, but also operational simplification, which is something the cloud was built to do.

By their very nature, cloud resources are used only when needed and are built for elasticity: when something happens that suddenly everyone wants to see or record, it’s no longer such a stress on the system. Video clouds are built to obviate the very notion of a “bottleneck” in storage and playback.
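
Expressed as configuration, that elasticity is often just an autoscaling policy. Here is a minimal sketch using the Kubernetes Python client to attach a HorizontalPodAutoscaler to a hypothetical “recorder” service; the deployment name, namespace, and thresholds are illustrative assumptions, not recommended settings.

```python
# A minimal autoscaling sketch for a recording service: add recorder instances
# when demand spikes and remove them afterward. The "recorder" deployment,
# "video" namespace, and the thresholds below are hypothetical.
from kubernetes import client, config


def autoscale_recorder() -> None:
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="recorder-autoscaler"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="recorder"
            ),
            min_replicas=2,  # steady-state capacity
            max_replicas=50,  # headroom for a must-record live event
            target_cpu_utilization_percentage=70,  # scale out before recorders saturate
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="video", body=hpa
    )
```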

The common denominators in these use cases: containerization and microservices, both core elements of our Infinite Video Platform. In essence, both represent an evolution beyond virtual machines (VMs) into even more individualized components. By that I mean that a VM tends to host an entire video function in software (encoding, encrypting, packaging, and so on), whereas containerization and microservices let each of those functions be implemented as its own small, independent service.

That way, service providers can more rapidly launch a microservice for any specific video application, or string several microservices together, as needed, to support different use cases. This is what we have done by containerizing our technology, so that operators can launch, troubleshoot, or pull down the microservices supporting an individual channel (for instance) without affecting other channels in a group. This makes operations not just less expensive (software vs. hardware) but also simpler and faster.
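
For instance, if every microservice carries a label identifying the channel it serves, pulling down one channel becomes a single, scoped operation that leaves every other channel in the group untouched. A minimal sketch, again assuming Kubernetes and the hypothetical per-channel labels used above:

```python
# A minimal sketch of pulling down one channel's microservices without touching
# the rest: delete only the deployments labeled with that channel's ID.
# The "video" namespace and "channel" label are hypothetical conventions.
from kubernetes import client, config


def teardown_channel(channel_id: str) -> None:
    config.load_kube_config()
    client.AppsV1Api().delete_collection_namespaced_deployment(
        namespace="video",
        label_selector=f"channel={channel_id}",  # scope the delete to one channel
    )
```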

It’s a progression we went through ourselves when we moved the Infinite Video Platform onto Amazon Web Services (AWS) last year, and it is equally portable to other public clouds, like Google Cloud and Microsoft Azure.

We’ll be demonstrating the Infinite Video Platform running in private and public clouds during CES, at the Wynn Hotel, including all three use cases described here. If you or your colleagues are exploring how the cloud can redefine your video operations, let’s talk.


