Field-to-frame transcoding with spatial and temporal downsampling

Susie J. Wee, John G. Apostolopoulos, Nick Feamster

Research output: Contribution to conference › Paper › peer-review

37 Scopus citations

Abstract

We present an algorithm for transcoding high-rate compressed bitstreams containing field-coded interlaced video into lower-rate compressed bitstreams containing frame-coded progressive video. We focus on MPEG-2 to H.263 transcoding; however, these results can be extended to other lower-rate video compression standards, including MPEG-4 Simple Profile and MPEG-1. A conventional approach to the transcoding problem involves decoding the input bitstream, spatially and temporally downsampling the decoded frames, and re-encoding the result. The proposed transcoder achieves improved performance by exploiting the details of the MPEG-2 and H.263 compression standards when performing interlaced-to-progressive (or field-to-frame) conversion with spatial downsampling and frame-rate reduction. The transcoder reduces the MPEG-2 decoding requirements by temporally downsampling the data at the bitstream level, and it reduces the H.263 encoding requirements by largely bypassing H.263 motion estimation, instead reusing the motion vectors and coding modes given in the input bitstream. In software implementations, the proposed approach achieved a 5x speedup over the conventional approach, with PSNR losses of only 0.3 dB and 0.5 dB for the Carousel and Bus sequences, respectively.
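The motion-vector reuse step described above can be pictured with a short sketch. The Python fragment below is an illustrative assumption, not the authors' implementation: all function and field names are hypothetical, and it shows only one plausible way to map a pair of MPEG-2 field motion vectors to a single H.263 frame motion vector under 2x spatial downsampling, plus a simple chaining rule for vectors that cross a dropped frame during 2:1 frame-rate reduction.

    # Illustrative sketch only; names and scaling conventions are assumptions.
    from dataclasses import dataclass

    @dataclass
    class MotionVector:
        dx: float  # horizontal displacement, in luma pixels
        dy: float  # vertical displacement, in luma pixels

    def field_mvs_to_frame_mv(mv_top: MotionVector,
                              mv_bottom: MotionVector,
                              spatial_factor: int = 2) -> MotionVector:
        """Derive a candidate H.263 frame MV from a pair of MPEG-2 field MVs.

        Field vertical units span two frame lines, so dy is doubled to move
        into frame coordinates before the two field vectors are averaged.
        Both components are then divided by the spatial downsampling factor
        (2 here, e.g. for a full-resolution to quarter-resolution reduction).
        """
        frame_dx = (mv_top.dx + mv_bottom.dx) / 2.0
        frame_dy = (2.0 * mv_top.dy + 2.0 * mv_bottom.dy) / 2.0
        return MotionVector(frame_dx / spatial_factor, frame_dy / spatial_factor)

    def compose_across_dropped_frame(mv_cur: MotionVector,
                                     mv_dropped: MotionVector) -> MotionVector:
        """Approximate MV composition for 2:1 frame-rate reduction: a vector
        pointing into a dropped frame is chained with that frame's own vector
        so the result references the previous retained frame."""
        return MotionVector(mv_cur.dx + mv_dropped.dx, mv_cur.dy + mv_dropped.dy)

In practice a transcoder might refine such candidate vectors with a small local search rather than a full-range motion estimation; the abstract's phrase "largely bypassing" motion estimation suggests that the bulk of the search cost is avoided in roughly this way.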

Original language: English (US)
Pages: 271-275
Number of pages: 5
State: Published - 1999
Event: International Conference on Image Processing (ICIP'99) - Kobe, Japan
Duration: Oct 24, 1999 – Oct 28, 1999


All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
