TY - JOUR
T1 - Learning Rank-1 Diffractive Optics for Single-Shot High Dynamic Range Imaging
AU - Sun, Qilin
AU - Tseng, Ethan
AU - Fu, Qiang
AU - Heidrich, Wolfgang
AU - Heide, Felix
N1 - Funding Information:
This work was supported by KAUST baseline funding.
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
N2 - High-dynamic-range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g., mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the proposed DOE which avoids a vast number of trainable parameters and preserves the encoding of high frequencies compared with conventional end-to-end design methods. We further propose a reconstruction network tailored to this rank-1 parameterization for recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
AB - High-dynamic-range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g., mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the proposed DOE which avoids a vast number of trainable parameters and preserves the encoding of high frequencies compared with conventional end-to-end design methods. We further propose a reconstruction network tailored to this rank-1 parameterization for recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
UR - http://www.scopus.com/inward/record.url?scp=85094810715&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094810715&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.00146
DO - 10.1109/CVPR42600.2020.00146
M3 - Conference article
AN - SCOPUS:85094810715
SN - 1063-6919
SP - 1383
EP - 1393
JO - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
M1 - 9157825
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Y2 - 14 June 2020 through 19 June 2020
ER -