Learning Hierarchical Semantic Segmentations of LIDAR Data

David Dohan, Brian Matejek, Thomas Funkhouser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

30 Scopus citations

Abstract

This paper investigates a method for semantic segmentation of small objects in terrestrial LIDAR scans of urban environments. The core research contribution is a hierarchical segmentation algorithm in which potential merges between segments are prioritized by a learned affinity function and constrained to occur only if they achieve a sufficiently high object classification probability. This approach provides a way to integrate a learned shape prior (the object classifier) into the search for the best semantic segmentation in a fast and practical algorithm. Experiments with LIDAR scans collected by Google Street View cars throughout ∼100 city blocks of New York City show that the algorithm provides better segmentations and classifications than simpler alternatives for cars, vans, traffic lights, and street lights.
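
The merge loop described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical Python rendering of that idea, not the authors' implementation: segments are assumed to be sets of point indices, and affinity (the learned merge-priority function), classify (the learned shape prior), and min_class_prob are illustrative stand-ins.

    import heapq
    import itertools

    def hierarchical_merge(segments, affinity, classify, min_class_prob=0.5):
        """Illustrative sketch: affinity-prioritized hierarchical merging,
        gated by a learned object classifier (the shape prior)."""
        # Active segments keyed by id, so stale queue entries can be skipped.
        active = {i: set(seg) for i, seg in enumerate(segments)}
        fresh_id = itertools.count(len(segments))

        # Max-heap of candidate merges, ordered by learned affinity
        # (heapq is a min-heap, hence the negated scores).
        heap = [(-affinity(active[a], active[b]), a, b)
                for a in active for b in active if a < b]
        heapq.heapify(heap)

        while heap:
            _, a, b = heapq.heappop(heap)
            if a not in active or b not in active:
                continue  # one side was already merged away
            merged = active[a] | active[b]
            # Gate: commit the merge only if the classifier assigns the
            # merged segment a sufficiently high object probability.
            if classify(merged) < min_class_prob:
                continue
            del active[a], active[b]
            m = next(fresh_id)
            active[m] = merged
            # Enqueue candidates between the new segment and survivors.
            for other in list(active):
                if other != m:
                    heapq.heappush(
                        heap, (-affinity(merged, active[other]), m, other))
        return list(active.values())

Under this sketch, high-affinity merges are attempted first and the classifier vetoes implausible ones; the paper's actual features, training procedure, and traversal details are in the full text.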

Original language: English (US)
Title of host publication: Proceedings - 2015 International Conference on 3D Vision, 3DV 2015
Editors: Michael Brown, Jana Kosecka, Christian Theobalt
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 273-281
Number of pages: 9
ISBN (Electronic): 9781467383325
State: Published - Nov 20, 2015
Event: 2015 International Conference on 3D Vision, 3DV 2015 - Lyon, France
Duration: Oct 19, 2015 - Oct 22, 2015

Publication series

Name: Proceedings - 2015 International Conference on 3D Vision, 3DV 2015

Other

Other: 2015 International Conference on 3D Vision, 3DV 2015
Country/Territory: France
City: Lyon
Period: 10/19/15 - 10/22/15

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition

Keywords

  • Google
  • Image segmentation
  • Laser radar
  • Semantics
  • Shape
  • Three-dimensional displays
  • Training
