Geodesic matting: A framework for fast interactive image and video segmentation and matting

Research output: Contribution to journal › Article › peer-review

235 Scopus citations

Abstract

An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear-time computation of weighted geodesic distances to user-provided scribbles, from which the entire dataset is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. Such features could nonetheless be naturally added to the proposed framework, if desired, as additional weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints in the distance definition make it possible to efficiently handle occlusions, such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented by numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.
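To make the idea concrete, the following is a minimal sketch (in Python) of scribble-based geodesic soft segmentation, not the authors' implementation: a simple per-channel Gaussian model of the scribbled pixels stands in for the scribble statistics, the gradient of the resulting foreground probability defines the weights, and skimage's MCP_Geometric supplies the weighted geodesic distances on the image grid. The function names, the exponent r, and the Gaussian model are illustrative assumptions.

import numpy as np
from skimage.graph import MCP_Geometric

def scribble_likelihood(image, scribble_mask):
    """Per-channel Gaussian likelihood of each pixel under the scribbled pixels (assumed model)."""
    samples = image[scribble_mask]                      # (n_scribbled, channels)
    mean = samples.mean(axis=0)
    var = samples.var(axis=0) + 1e-6
    diff = image - mean
    log_p = -0.5 * ((diff ** 2) / var + np.log(2 * np.pi * var)).sum(axis=-1)
    return np.exp(log_p)

def geodesic_distance(cost, scribble_mask):
    """Weighted geodesic distance from every pixel to the nearest scribbled pixel."""
    mcp = MCP_Geometric(cost)
    starts = list(zip(*np.nonzero(scribble_mask)))
    distances, _ = mcp.find_costs(starts)
    return distances

def geodesic_matte(image, fg_scribbles, bg_scribbles, r=2.0, eps=1e-6):
    """Soft alpha matte from foreground/background scribble masks (hypothetical helper)."""
    p_fg = scribble_likelihood(image, fg_scribbles)
    p_bg = scribble_likelihood(image, bg_scribbles)
    # Normalized foreground probability; its gradient magnitude is used as the
    # local cost, so geodesics are cheap inside regions that resemble the
    # corresponding scribbles and expensive across probability boundaries.
    prob_fg = p_fg / (p_fg + p_bg + eps)
    gy, gx = np.gradient(prob_fg)
    cost = np.hypot(gx, gy) + eps
    d_fg = geodesic_distance(cost, fg_scribbles)
    d_bg = geodesic_distance(cost, bg_scribbles)
    # Blend the two distances into an alpha value in [0, 1].
    w_fg = (d_fg + eps) ** (-r) * prob_fg
    w_bg = (d_bg + eps) ** (-r) * (1.0 - prob_fg)
    return w_fg / (w_fg + w_bg)

Here fg_scribbles and bg_scribbles are boolean masks of the user's strokes on an (H, W, C) float image; the resulting alpha array is a soft segmentation that could be refined further, as the abstract's localized refinement step describes.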

Original language: English (US)
Pages (from-to): 113-132
Number of pages: 20
Journal: International Journal of Computer Vision
Volume: 82
Issue number: 2
DOIs
State: Published - Apr 2009
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Keywords

  • Fast algorithms
  • Geodesic computations
  • Interactive image and video segmentation
  • Matting
  • User-provided scribbles
  • Weighted distance functions
