TY - GEN
T1 - Focal sweep imaging with multi-focal diffractive optics
AU - Peng, Yifan
AU - Dun, Xiong
AU - Sun, Qilin
AU - Heide, Felix
AU - Heidrich, Wolfgang
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/5/29
Y1 - 2018/5/29
N2 - Depth-dependent defocus results in a limited depth-of-field in consumer-level cameras. Computational imaging provides alternative solutions to recover all-in-focus images with the assistance of designed optics and algorithms. In this work, we extend the concept of focal sweep from refractive optics to diffractive optics, fusing multiple focal powers onto one single element. In contrast to state-of-the-art sweep models, ours can generate better-conditioned point spread function (PSF) distributions along the expected depth range with a drastically shortened (40%) sweep distance. Further, by encoding axially asymmetric PSFs per color channel and then sharing sharp information across channels, we preserve details as well as color fidelity. We prototype two diffractive imaging systems that work in the monochromatic and RGB color domains. Experimental results indicate that the depth-of-field can be significantly extended with fewer artifacts remaining after deconvolution.
AB - Depth-dependent defocus results in a limited depth-of-field in consumer-level cameras. Computational imaging provides alternative solutions to recover all-in-focus images with the assistance of designed optics and algorithms. In this work, we extend the concept of focal sweep from refractive optics to diffractive optics, fusing multiple focal powers onto one single element. In contrast to state-of-the-art sweep models, ours can generate better-conditioned point spread function (PSF) distributions along the expected depth range with a drastically shortened (40%) sweep distance. Further, by encoding axially asymmetric PSFs per color channel and then sharing sharp information across channels, we preserve details as well as color fidelity. We prototype two diffractive imaging systems that work in the monochromatic and RGB color domains. Experimental results indicate that the depth-of-field can be significantly extended with fewer artifacts remaining after deconvolution.
UR - http://www.scopus.com/inward/record.url?scp=85048870368&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048870368&partnerID=8YFLogxK
U2 - 10.1109/ICCPHOT.2018.8368469
DO - 10.1109/ICCPHOT.2018.8368469
M3 - Conference contribution
AN - SCOPUS:85048870368
T3 - IEEE International Conference on Computational Photography, ICCP 2018
SP - 1
EP - 8
BT - IEEE International Conference on Computational Photography, ICCP 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Conference on Computational Photography, ICCP 2018
Y2 - 4 May 2018 through 6 May 2018
ER -