Multiscale shape and detail enhancement from multi-light image collections

Raanan Fattal, Maneesh Agrawala, Szymon Rusinkiewicz

Research output: Contribution to conference › Paper › peer-review

100 Scopus citations

Abstract

We present a new image-based technique for enhancing the shape and surface details of an object. The input to our system is a small set of photographs taken from a fixed viewpoint but under varying lighting conditions. For each image we compute a multiscale decomposition based on the bilateral filter and then reconstruct an enhanced image that combines detail information at each scale across all the input images. Our approach does not require any information about light source positions or camera calibration, and can produce good results with 3 to 5 input images. In addition, our system provides a few high-level parameters for controlling the amount of enhancement and does not require pixel-level user input. We show that the bilateral filter is a good choice for our multiscale algorithm because it avoids the halo artifacts commonly associated with the traditional Laplacian image pyramid. We also develop a new scheme for computing our multiscale bilateral decomposition that is simple to implement, fast (O(N² log N)), and accurate.
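The abstract outlines two computational pieces: a per-image multiscale bilateral decomposition and a rule for recombining the detail layers across the differently lit inputs. The sketch below is a minimal illustration of that pipeline, not the authors' algorithm: it assumes aligned grayscale float images in [0, 1], uses OpenCV's bilateralFilter with an illustrative sigma schedule and level count, and combines detail by keeping the largest-magnitude coefficient per scale, which is one plausible rule rather than the paper's. The paper's fast O(N² log N) decomposition scheme is not reproduced here.

import cv2
import numpy as np

def bilateral_decompose(img, levels=3, sigma_s=4.0, sigma_r=0.1):
    # Split a float32 grayscale image into a coarse base layer plus one
    # detail layer per scale, doubling the spatial sigma at each level.
    details = []
    current = img
    for i in range(levels):
        smoothed = cv2.bilateralFilter(current, d=-1,
                                       sigmaColor=sigma_r,
                                       sigmaSpace=sigma_s * (2 ** i))
        details.append(current - smoothed)   # detail removed at this scale
        current = smoothed
    return current, details

def enhance(images, boost=2.0):
    # images: list of aligned float32 arrays in [0, 1] of the same scene,
    # fixed viewpoint, different lighting. Returns one enhanced image.
    bases, all_details = [], []
    for img in images:
        base, details = bilateral_decompose(img)
        bases.append(base)
        all_details.append(details)

    # Average the base layers, then at each scale keep the detail value of
    # largest magnitude across the lighting conditions (illustrative rule).
    result = np.mean(bases, axis=0)
    for level in range(len(all_details[0])):
        stack = np.stack([d[level] for d in all_details])      # (n, H, W)
        idx = np.argmax(np.abs(stack), axis=0)                  # (H, W)
        chosen = np.take_along_axis(stack, idx[None], axis=0)[0]
        result += boost * chosen
    return np.clip(result, 0.0, 1.0)

As a usage sketch, enhance([cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0 for p in paths]) would blend three to five aligned photographs into a single detail-enhanced result.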

Original language: English (US)
DOIs
State: Published - 2007
Event: 34th Annual Meeting of the Association for Computing Machinery's Special Interest Group on Graphics - San Diego, CA, United States
Duration: Aug 5, 2007 – Aug 9, 2007


All Science Journal Classification (ASJC) codes

  • General Computer Science

Keywords

  • Bilateral filter
  • Image enhancement
  • Multiscale image processing
  • NPR
  • Relighting
  • Shape depiction
