Content-aware distortion-fair video streaming in networks

Ying Li, Zhu Li, Mung Chiang, A. Robert Calderbank

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Scopus citations


The Internet is experiencing explosive growth in video traffic. Given limited network bandwidth, providing Internet users with good video playback quality is a key problem. For video clips competing for bandwidth, we propose a Content-Aware distortion-Fair (CAF) video delivery scheme, which is aware of the characteristics of video frames and ensures max-min distortion-fair sharing among video flows. Unlike bandwidth-fair sharing, CAF targets fairness in video playback quality, since users care about video quality rather than bandwidth. The proposed CAF approach does not require an analytical rate-distortion function, which is difficult to estimate; instead, it uses the explicit distortion that dropping each frame induces. With content-aware cooperation, our CAF approach is fast and practical. Experimental results show that, when the network is congested, the proposed approach yields better quality of service than a non-rate-distortion-optimized approach, and it enables competing video clips to help each other achieve fair playback quality.
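The abstract's core idea can be illustrated with a small greedy sketch. The paper itself does not publish this code; everything below is an assumption for illustration: per-frame drop distortions are treated as additive within a flow, and frames are dropped one at a time from whichever flow currently has the lowest accumulated distortion, which pushes the allocation toward max-min distortion fairness.

```python
def caf_drop(flows, capacity):
    """Illustrative greedy max-min distortion-fair frame dropping.

    flows: list of flows; each flow is a list of (size_bits, drop_distortion)
           tuples, where drop_distortion is the explicit distortion that
           dropping that frame would add to its flow (additivity is a
           simplifying assumption, not a claim from the paper).
    capacity: total bits the congested link can carry.
    Returns the per-flow accumulated distortion after enough drops.
    """
    # Within each flow, consider the cheapest drops (least distortion) first.
    queues = [sorted(f, key=lambda fr: fr[1]) for f in flows]
    total = sum(size for f in flows for size, _ in f)
    dist = [0.0] * len(flows)   # accumulated distortion per flow
    idx = [0] * len(flows)      # next droppable frame per flow

    while total > capacity:
        # Candidate flows that still have frames left to drop.
        cands = [i for i in range(len(flows)) if idx[i] < len(queues[i])]
        if not cands:
            break
        # Max-min fairness: take the next drop from the flow whose
        # accumulated distortion is currently the lowest.
        i = min(cands, key=lambda j: dist[j])
        size, d = queues[i][idx[i]]
        idx[i] += 1
        dist[i] += d
        total -= size
    return dist
```

For example, two flows of two 10-bit frames each squeezed into a 20-bit budget lose one frame apiece, so neither flow bears the whole distortion cost. Note how this uses only the explicit per-frame distortions, with no analytical rate-distortion model, matching the design point the abstract emphasizes.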

Original language: English (US)
Title of host publication: 2008 IEEE Global Telecommunications Conference, GLOBECOM 2008
Number of pages: 6
State: Published - 2008
Event: 2008 IEEE Global Telecommunications Conference, GLOBECOM 2008 - New Orleans, LA, United States
Duration: Nov 30, 2008 – Dec 4, 2008

Publication series

Name: GLOBECOM - IEEE Global Telecommunications Conference


Other: 2008 IEEE Global Telecommunications Conference, GLOBECOM 2008
Country/Territory: United States
City: New Orleans, LA

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

