Higher quality in video is obtained by using higher resolutions, more colors (increased bits per pixel), spatial audio (multiple audio channels and higher sampling rates), and multiple displays. All of these parameters increase demand for bandwidth — in turn increasing the sensitivity to degraded network conditions.
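To see how quickly these parameters compound, here is a minimal back-of-the-envelope sketch of raw (pre-compression) bandwidth. The function names and the 1080p/5.1 figures are illustrative assumptions, not from any specific product:

```python
def video_bandwidth_bps(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video bandwidth in bits per second."""
    return width * height * bits_per_pixel * fps

def audio_bandwidth_bps(channels, sample_rate_hz, bits_per_sample):
    """Raw (uncompressed) audio bandwidth in bits per second."""
    return channels * sample_rate_hz * bits_per_sample

# 1080p at 30 fps, 24 bits per pixel: ~1.49 Gbit/s before compression
hd = video_bandwidth_bps(1920, 1080, 24, 30)

# 5.1 spatial audio at 48 kHz, 16-bit samples: ~4.6 Mbit/s before compression
spatial = audio_bandwidth_bps(6, 48_000, 16)

print(f"raw 1080p video: {hd / 1e9:.2f} Gbit/s")
print(f"raw 5.1 audio:   {spatial / 1e6:.2f} Mbit/s")
```

Compression reduces these numbers dramatically, but every step up in resolution, color depth, channel count, or display count multiplies the starting point, and with it the sensitivity to network degradation.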
With video, once impairments become apparent, the quality of the session deteriorates very quickly. Users are easily disturbed by poor video quality, and the bandwidth burden of video means that even slight degradation of service within the network can significantly affect the video experience. Similarly, the accompanying audio must remain satisfactory and consistently synchronized with the video, an even more stringent requirement.
We are increasingly engaging in multistream interactions (multiple audio and video streams combined to form a single immersive experience). For example, Cisco TelePresence meetings consist of multiple HD video and audio streams combined to deliver the illusion of a shared space. Cisco WebEx conferencing also can include multiple participants on webcams and telephones. These multistream interactions place additional demands on the network, because all streams now must be handled as one to ensure a consistent experience (they need to stay perfectly synchronized — not only between audio and video, but also between the multiple video streams and multiple audio streams). All streams must take the same path through the network and must be given the same priority in order to avoid problems such as lip-synchronization errors (when audio is out of phase with video).
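The synchronization requirement above can be sketched as a simple check across all streams of one interaction: compare each stream's presentation clock and flag the session when the largest pairwise skew exceeds a tolerance. The 45 ms tolerance here is an illustrative assumption (real lip-sync limits are asymmetric and codec-dependent):

```python
def max_skew_ms(presentation_times_s):
    """Largest pairwise skew, in ms, among the streams' presentation clocks.

    presentation_times_s maps a stream name to the presentation time (in
    seconds) of the sample currently being rendered.
    """
    times = list(presentation_times_s.values())
    return (max(times) - min(times)) * 1000.0

def synchronized(presentation_times_s, tolerance_ms=45.0):
    """True if all streams of the interaction are within tolerance."""
    return max_skew_ms(presentation_times_s) <= tolerance_ms

# Three streams of one TelePresence-style interaction, 20 ms apart: in sync.
streams = {"audio": 12.000, "video_left": 12.010, "video_right": 12.020}
print(synchronized(streams))
```

Note that the check is across *all* streams, not just one audio/video pair, which is exactly why the streams must share a path and a priority: divergent queuing delays show up directly as skew.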
It is no longer possible to prioritize network traffic solely by media type (video first, audio second, then text applications); all streams of the same interaction must be handled with the same quality-of-service (QoS) guarantees. The cost of not designing the network for media will be poor quality for not just some, but all, of the growing number of applications.
Assuring quality extends beyond provisioning bandwidth to providing the necessary intelligent network services. The network, management tools, and end systems must also be able to monitor themselves and report on problems within the network as well as at the endpoints. By understanding the media stream and its requirements, they can establish tolerance thresholds for the quality of experience (QoE); once a threshold is exceeded, the application and network can decide to reroute around a network failure, suspend the session, or lower users' quality expectations. Monitoring is essential to maximize the efficiency of finite network resources, enabling the network to adapt dynamically to prevailing conditions without wasteful overprovisioning — a common inefficiency in many video deployments.
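That threshold-driven decision logic might look like the following sketch. The metric names, threshold values, and action strings are all illustrative assumptions; a real deployment would derive them from the codec, the SLA, and the monitoring system in use:

```python
from dataclasses import dataclass

@dataclass
class StreamStats:
    packet_loss_pct: float   # observed packet loss for the media stream
    jitter_ms: float         # observed inter-packet jitter
    path_up: bool            # whether the current network path is reachable

# Illustrative tolerance thresholds, not values from any standard.
LOSS_TOLERANCE_PCT = 1.0
JITTER_TOLERANCE_MS = 30.0

def qoe_action(stats: StreamStats) -> str:
    """Decide how to react once QoE falls beyond its tolerance threshold."""
    if not stats.path_up:
        return "reroute"        # network failure: route around it
    if stats.packet_loss_pct > 5 * LOSS_TOLERANCE_PCT:
        return "suspend"        # unusable: suspend the session
    if (stats.packet_loss_pct > LOSS_TOLERANCE_PCT
            or stats.jitter_ms > JITTER_TOLERANCE_MS):
        return "lower_quality"  # degrade gracefully, e.g. drop resolution
    return "ok"                 # within tolerance: leave the session alone

print(qoe_action(StreamStats(packet_loss_pct=2.0, jitter_ms=10.0, path_up=True)))
```

The point of the sketch is the shape of the policy, not the numbers: without measurement there is no way to trigger any of these branches, which is why monitoring, rather than overprovisioning, is the efficient path.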
As video continues to be deployed within the enterprise, network operators will have to face and solve these challenges, as they did with voice deployments. Application operators are concerned with delivering a predictable, high-quality video experience to their end users. Network operators are concerned not only with video applications but with all the applications that run on top of the network. The network operator has to balance the demand for predictable high quality against delivering the right performance for each application. With proper planning and the right tools, however, this experience can be a positive one that brings application and network requirements together.
This is the last part of the three-part post. Part 1 discussed predictability. Part 2 discussed performance. Don’t forget to leave a comment and let me know what you think.