TY - CONF
T1 - Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors
T2 - 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014
AU - Heide, Felix
AU - Xiao, Lei
AU - Heidrich, Wolfgang
AU - Hullin, Matthias B.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/9/24
Y1 - 2014/9/24
AB - The functional difference between a diffuse wall and a mirror is well understood: one scatters incoming light back in all directions, while the other preserves the directionality of reflected light. The temporal structure of the light, however, is left intact by both: assuming simple surface reflection, photons that arrive first are reflected first. In this paper, we exploit this insight to recover objects outside the line of sight from second-order diffuse reflections, effectively turning walls into mirrors. We formulate the reconstruction task as a linear inverse problem on the transient response of a scene, which we acquire using an affordable setup consisting of a modulated light source and a time-of-flight image sensor. By exploiting sparsity in the reconstruction domain, we achieve resolutions on the order of a few centimeters for object shape (both laterally and in depth) and albedo. Our method is robust to ambient light, works for large room-sized scenes, and is drastically faster and less expensive than previous approaches based on femtosecond lasers and streak cameras, while requiring no moving parts.
UR - http://www.scopus.com/inward/record.url?scp=84911449835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84911449835&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2014.418
DO - 10.1109/CVPR.2014.418
M3 - Conference contribution
AN - SCOPUS:84911449835
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 3222
EP - 3229
BT - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
PB - IEEE Computer Society
Y2 - 23 June 2014 through 28 June 2014
ER -
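
The abstract above casts non-line-of-sight reconstruction as a sparse linear inverse problem on the scene's transient response. As a purely illustrative sketch, and not the authors' actual light-transport model or solver, the Python snippet below shows one generic way such a problem can be attacked with iterative soft-thresholding (ISTA); the operator A, the measurements y, and all parameters are hypothetical placeholders rather than a model of modulated two-bounce transport.

    # Illustrative only: generic sparse linear inverse solve (ISTA) for
    # min_x 0.5*||A x - y||^2 + lam*||x||_1, standing in for the class of
    # sparsity-regularized reconstruction described in the abstract.
    import numpy as np

    def ista(A, y, lam=0.1, n_iter=200):
        """Iterative soft-thresholding for a sparse linear inverse problem."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)             # gradient of 0.5*||A x - y||^2
            z = x - grad / L                     # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold (L1 prox)
        return x

    # Toy usage with a random placeholder operator (not a transient imaging model).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((400, 1000))         # hypothetical measurement operator
    x_true = np.zeros(1000)
    x_true[rng.choice(1000, 20, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(400)
    x_rec = ista(A, y)

In the paper's setting, the measurement operator would instead encode the modulated illumination and second-order diffuse light transport off the wall, which this generic sketch does not attempt to model.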