TY - GEN
T1 - SpatialSense
T2 - 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
AU - Yang, Kaiyu
AU - Russakovsky, Olga
AU - Deng, Jia
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
AB - Understanding the spatial relations between objects in images is a surprisingly challenging task. A chair may be 'behind' a person even if it appears to the left of the person in the image (depending on which way the person is facing). Two students that appear close to each other in the image may not in fact be 'next to' each other if there is a third student between them. We introduce SpatialSense, a dataset specializing in spatial relation recognition which captures a broad spectrum of such challenges, allowing for proper benchmarking of computer vision techniques. SpatialSense is constructed through adversarial crowdsourcing, in which human annotators are tasked with finding spatial relations that are difficult to predict using simple cues such as 2D spatial configuration or language priors. Adversarial crowdsourcing significantly reduces dataset bias and samples more interesting relations in the long tail compared to existing datasets. On SpatialSense, state-of-the-art recognition models perform comparably to simple baselines, suggesting that they rely on straightforward cues instead of fully reasoning about this complex task. The SpatialSense benchmark provides a path forward to advancing the spatial reasoning capabilities of computer vision systems. The dataset and code are available at https://github.com/princeton-vl/SpatialSense.
UR - http://www.scopus.com/inward/record.url?scp=85081921368&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081921368&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00214
DO - 10.1109/ICCV.2019.00214
M3 - Conference contribution
AN - SCOPUS:85081921368
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2051
EP - 2060
BT - Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 October 2019 through 2 November 2019
ER -