TY - CONF
T1 - Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching
AU - Zeng, Andy
AU - Song, Shuran
AU - Yu, Kuan-Ting
AU - Donlon, Elliott
AU - Hogan, Francois R.
AU - Bauza, Maria
AU - Ma, Daolin
AU - Taylor, Orion
AU - Liu, Melody
AU - Romo, Eudald
AU - Fazeli, Nima
AU - Alet, Ferran
AU - Dafle, Nikhil Chavan
AU - Holladay, Rachel
AU - Morona, Isabella
AU - Qu Nair, Prem
AU - Green, Druck
AU - Taylor, Ian
AU - Liu, Weber
AU - Funkhouser, Thomas
AU - Rodriguez, Alberto
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/9/10
Y1 - 2018/9/10
AB - This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu.
UR - http://www.scopus.com/inward/record.url?scp=85063150723&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063150723&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2018.8461044
DO - 10.1109/ICRA.2018.8461044
M3 - Conference contribution
AN - SCOPUS:85063150723
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 3750
EP - 3757
BT - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Y2 - 21 May 2018 through 25 May 2018
ER -
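
The abstract above describes two components that a short sketch can make concrete. First, the multi-affordance grasping step: a category-agnostic predictor scores every pixel for each of four grasping primitive behaviors, and the system executes the primitive at the highest-scoring location. The sketch below assumes a predictor that returns one dense affordance map per primitive; the predictor here is a random stub, and only the four primitive names come from the paper (which uses fully convolutional networks for this step).

    import numpy as np

    PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

    def predict_affordances(rgbd, num_primitives=len(PRIMITIVES)):
        """Stand-in for the learned category-agnostic affordance predictor:
        returns one dense affordance map (values in [0, 1]) per primitive."""
        h, w = rgbd.shape[:2]
        rng = np.random.default_rng(0)
        return rng.random((num_primitives, h, w))

    def select_grasp(rgbd):
        """Pick the primitive and pixel location with the highest affordance."""
        maps = predict_affordances(rgbd)
        primitive_idx, y, x = np.unravel_index(np.argmax(maps), maps.shape)
        return PRIMITIVES[primitive_idx], (y, x), maps[primitive_idx, y, x]

    if __name__ == "__main__":
        frame = np.zeros((480, 640, 4))  # placeholder RGB-D frame
        primitive, (y, x), score = select_grasp(frame)
        print(f"execute {primitive} at pixel ({x}, {y}), affordance {score:.3f}")

Second, the cross-domain recognition step: observed images of the grasped object and product images from the catalog are embedded into a shared feature space, and recognition reduces to nearest-neighbor matching. A minimal sketch assuming the embeddings are already computed (the catalog labels and vectors below are hypothetical; in the paper the embeddings come from a two-stream convolutional network trained with a distance-based loss):

    import numpy as np

    def cosine_match(observed_vec, product_vecs, product_labels):
        """Return the product label whose embedding is most similar to the
        observed-image embedding under cosine similarity."""
        obs = observed_vec / np.linalg.norm(observed_vec)
        prods = product_vecs / np.linalg.norm(product_vecs, axis=1, keepdims=True)
        sims = prods @ obs
        return product_labels[int(np.argmax(sims))], float(np.max(sims))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        labels = ["duct_tape", "toothbrush", "tennis_balls"]  # hypothetical catalog
        product_vecs = rng.normal(size=(3, 128))  # embeddings of product images
        observed = product_vecs[1] + 0.05 * rng.normal(size=128)  # noisy observation
        label, sim = cosine_match(observed, product_vecs, labels)
        print(f"matched to '{label}' (cosine similarity {sim:.3f})")

Because the catalog side of the match needs only product images, which are readily available online, this matching scheme is what lets the system handle novel objects without additional task-specific training data.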