Learning Rank-1 Diffractive Optics for Single-Shot High Dynamic Range Imaging

Qilin Sun, Ethan Tseng, Qiang Fu, Wolfgang Heidrich, Felix Heide

Research output: Contribution to journal › Conference article › peer-review

65 Scopus citations


High-dynamic range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g., mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the proposed DOE which avoids the vast number of trainable parameters of conventional end-to-end design methods while retaining high-frequency detail in the encoding. We further propose a reconstruction network tailored to this rank-1 parameterization for recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
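As a rough illustration of the rank-1 idea, the sketch below (plain PyTorch, written for this summary) parameterizes an N x N DOE height map as the outer product of two trainable 1-D profiles, so the optic is described by 2N parameters instead of N^2. The class name Rank1DOE, the fabrication bound max_height, and the refractive-index contrast delta_n are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class Rank1DOE(nn.Module):
    """Minimal sketch of a rank-1 DOE height-map parameterization.

    Instead of learning a full N x N height map (N**2 parameters),
    learn two 1-D profiles whose outer product forms the 2-D map
    (2N parameters). Shapes and constants here are assumptions made
    for illustration, not values from the paper.
    """

    def __init__(self, n: int = 512, max_height: float = 1.2e-6):
        super().__init__()
        # Two trainable 1-D profiles; their outer product is the height map.
        self.u = nn.Parameter(torch.rand(n))
        self.v = nn.Parameter(torch.rand(n))
        self.max_height = max_height  # assumed fabrication height bound (meters)

    def height_map(self) -> torch.Tensor:
        # Rank-1 construction: h[i, j] = u[i] * v[j].
        h = torch.outer(self.u, self.v)
        # Keep the map inside an assumed fabricable height range.
        return h.clamp(0.0, 1.0) * self.max_height

    def phase(self, wavelength: float = 550e-9, delta_n: float = 0.5) -> torch.Tensor:
        # Phase delay under the thin-element approximation for a given
        # wavelength and refractive-index contrast delta_n (both assumed).
        return 2.0 * math.pi * delta_n * self.height_map() / wavelength
```

The outer-product structure is what makes end-to-end optimization of such an element tractable: gradients flow into only the two 1-D profiles, while the simulated wavefront still sees a full 2-D phase pattern.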

Original language: English (US)
Article number: 9157825
Pages (from-to): 1383-1393
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: Jun 14 2020 - Jun 19 2020

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
