What better place than the International Consumer Electronics Show to shine a light on consumer experiences in the device landscape — and not just the sparkly, new, all-IP dandies, mind you — such that all devices are treated as first-class citizens?
That’s part of what we will be demonstrating (at the Wynn Hotel) this week in Vegas: a mixture of what we’re calling “smart streaming” and machine-level data, mined from across the video headend, network elements and end-devices. With it, service providers can deliver the highest possible video quality, securely, reliably, at scale and without compromise, from encoder to screen.
Why? Because it’s our view that perhaps the greatest untapped source of competitive differentiation, in a saturated and fragmenting marketplace, is video quality. Let’s face it: Adding more HD (or even 4K) channels to a package (or slimming packages, for that matter) probably won’t do a whole lot to make consumers love you more.
This is even more important as video viewing shifts online; according to a recent survey from Limelight Networks, over 46% of respondents say they will stop watching an online video after it buffers a second time.
But what if you could breathe new life into fielded, legacy set-tops while also increasing the quality of streamed video and dramatically reducing latency on live streams delivered online?
This is what we mean when we say that our Infinite Video Platform, together with the extensions we are building into it, creates a blended environment that qualitatively exceeds what’s possible in the individual “silos” of broadcast and OTT-delivered video.
Now flip to the existing streaming landscape. Until now, the standard approach to adaptive bit rate (ABR) streaming has the client (phone, tablet, TV) request segments based on available bandwidth alone, invariably asking for the largest segment that fits, even if the extra bits are “dropping on the floor.” In other words: no intelligence, no manners. And in a world of increasingly discriminating consumers, it’s a leading cause of the “two strikes and you’re out” behavior described above.
We are adding intelligence to online video: machine-level data captured from encoders, network elements and video clients gives us end-to-end visibility, which we can uniquely combine to deliver “smart streaming” solutions.
First, we take a video quality assessment score and apply it to ABR-encoded content on a segment-by-segment basis. Network elements and end-devices can then use the resulting quality data to decide which segment to request. Suddenly, you have Smart ABR: clients that request segments based on the video quality score in addition to bit rate.
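To make the idea concrete, here is a minimal sketch of the two selection policies. The rendition ladder, the 0–100 quality scale and the function names are illustrative assumptions for this post, not our actual scoring system or client code.

```python
# Hypothetical sketch: classic bandwidth-only ABR vs. "Smart ABR" that
# also consults a per-segment quality score. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Rendition:
    bitrate_kbps: int   # encoded bit rate of this segment variant
    quality: float      # assumed per-segment quality score, 0 (worst) to 100 (best)

def pick_bandwidth_only(ladder, available_kbps):
    """Classic ABR: grab the biggest segment that fits the pipe."""
    fitting = [r for r in ladder if r.bitrate_kbps <= available_kbps]
    if not fitting:
        return min(ladder, key=lambda r: r.bitrate_kbps)
    return max(fitting, key=lambda r: r.bitrate_kbps)

def pick_smart(ladder, available_kbps, target_quality=85.0):
    """Smart ABR: take the cheapest segment that already meets the quality
    target; bits spent past that point would be dropped on the floor."""
    fitting = [r for r in ladder if r.bitrate_kbps <= available_kbps]
    if not fitting:
        return min(ladder, key=lambda r: r.bitrate_kbps)
    good_enough = [r for r in fitting if r.quality >= target_quality]
    if good_enough:
        return min(good_enough, key=lambda r: r.bitrate_kbps)
    return max(fitting, key=lambda r: r.quality)

# An easy-to-encode segment hits the quality target at a modest bit rate,
# so the smart client leaves the extra bandwidth on the table.
ladder = [Rendition(1500, 82.0), Rendition(3000, 88.0), Rendition(6000, 93.0)]
print(pick_bandwidth_only(ladder, 8000).bitrate_kbps)  # 6000
print(pick_smart(ladder, 8000).bitrate_kbps)           # 3000
```

The design point is simply that segment choice becomes a two-variable decision: bandwidth caps what you *can* fetch, while the quality score decides what you *should* fetch.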
It’s an optimization technique that saves bandwidth and increases overall delivered video quality at the same time. Picture a fixed-bandwidth household: Dad’s watching the football game; Mom’s watching the news. Mom’s stream needs fewer bits (but still looks awesome), and the savings can go toward making the stream on the big screen look and sound pristine… getting back to that football game.
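That shared-link scenario can be sketched as a tiny allocation problem: pick one rendition per stream so the worse-off viewer is as well off as possible within the shared pipe. The ladders and quality numbers below are made-up illustrations, not measured data.

```python
# Hypothetical sketch of the shared-link scenario: two streams split a
# fixed pipe, and per-segment quality scores let the easy-to-encode
# stream hand bits back to the hard one. Numbers are illustrative.
from itertools import product

# (bitrate_kbps, quality 0-100) pairs, assumed for this example.
news_ladder = [(1000, 90.0), (2500, 93.0), (5000, 95.0)]  # talking heads: easy to encode
game_ladder = [(2500, 70.0), (5000, 82.0), (9000, 91.0)]  # fast motion: hard to encode

def best_pair(ladder_a, ladder_b, total_kbps):
    """Choose one rendition per stream, maximizing the worse of the two
    quality scores subject to the shared bandwidth cap."""
    feasible = [(a, b) for a, b in product(ladder_a, ladder_b)
                if a[0] + b[0] <= total_kbps]
    return max(feasible, key=lambda pair: min(pair[0][1], pair[1][1]))

news, game = best_pair(news_ladder, game_ladder, total_kbps=10000)
print(news, game)  # the news stream takes 1000 kbps; the game gets 9000
```

With a bandwidth-only policy the two clients would fight for the biggest segments they could grab; with quality in the loop, the news stream settles for 1 Mbps it can’t visibly improve on, and the football game gets the rest.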
If you’ve ever watched live sports on TV while streaming it on a social network or a TV Everywhere application, you’ve probably gotten a little (or a lot) blistered at the delay between what you’re seeing on the TV and what you’re seeing online. Anecdotal evidence suggests delays of up to a minute, and usually 30+ seconds. Ugh. Simply changing channels can be equally ugly, latency-wise.
That’s why I’m excited to report on our innovations, from the output of the encoder, through the CDN, to connected devices, that trim that latency down to… well, less time than it takes to sing one stanza of the refrain of “Yellow Submarine.” So, yes, basic physics dictates the realities of transmission delays. But we have ways to make them as small, and as un-irritating, as possible.
This is the kind of stuff that really is easier to explain if you’re looking at it live. So come by and see us! And Happy New Year!