TY - GEN
T1 - Gated2Depth: Real-Time Dense Lidar from Gated Images
T2 - 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
AU - Gruber, Tobias
AU - Julca-Aguilar, Frank
AU - Bijelic, Mario
AU - Heide, Felix
N1 - Funding Information:
This work has received funding from the European Union under the H2020 ECSEL Programme as part of the DENSE project, contract number 692449. Werner Ritter supervised this project at Daimler AG, and Klaus Dietmayer supervised the project portion at Ulm University. We thank Robert Bühler, Stefanie Walz, and Yao Wang for help processing the large dataset. We thank Fahim Mannan for fruitful discussions and comments on the manuscript.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - We present an imaging framework that converts three images from a gated camera into high-resolution depth maps with depth accuracy comparable to pulsed lidar measurements. Existing scanning lidar systems achieve low spatial resolution at large ranges due to mechanically limited angular sampling rates, restricting scene understanding tasks to close-range clusters with dense sampling. Moreover, today's pulsed lidar scanners suffer from high cost, power consumption, and large form factors, and they fail in the presence of strong backscatter. We depart from point scanning and demonstrate that it is possible to turn a low-cost CMOS gated imager into a dense depth camera with a range of at least 80 m by learning depth from three gated images. The proposed architecture exploits semantic context across gated slices, and is trained on a synthetic discriminator loss without the need for dense depth labels. The proposed replacement for scanning lidar systems is real-time, handles backscatter, and provides dense depth at long ranges. We validate our approach in simulation and on real-world data acquired over 4,000 km of driving in northern Europe. Data and code are available at https://github.com/gruberto/Gated2Depth.
AB - We present an imaging framework that converts three images from a gated camera into high-resolution depth maps with depth accuracy comparable to pulsed lidar measurements. Existing scanning lidar systems achieve low spatial resolution at large ranges due to mechanically limited angular sampling rates, restricting scene understanding tasks to close-range clusters with dense sampling. Moreover, today's pulsed lidar scanners suffer from high cost, power consumption, and large form factors, and they fail in the presence of strong backscatter. We depart from point scanning and demonstrate that it is possible to turn a low-cost CMOS gated imager into a dense depth camera with a range of at least 80 m by learning depth from three gated images. The proposed architecture exploits semantic context across gated slices, and is trained on a synthetic discriminator loss without the need for dense depth labels. The proposed replacement for scanning lidar systems is real-time, handles backscatter, and provides dense depth at long ranges. We validate our approach in simulation and on real-world data acquired over 4,000 km of driving in northern Europe. Data and code are available at https://github.com/gruberto/Gated2Depth.
UR - http://www.scopus.com/inward/record.url?scp=85080398519&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85080398519&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00159
DO - 10.1109/ICCV.2019.00159
M3 - Conference contribution
AN - SCOPUS:85080398519
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 1506
EP - 1516
BT - Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 October 2019 through 2 November 2019
ER -