Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

Andy Zeng, Shuran Song, Kuan Ting Yu, Elliott Donlon, Francois R. Hogan, Maria Bauza, Daolin Ma, Orion Taylor, Melody Liu, Eudald Romo, Nima Fazeli, Ferran Alet, Nikhil Chavan Dafle, Rachel Holladay, Isabella Morona, Prem Qu Nair, Druck Green, Ian Taylor, Weber Liu, Thomas Funkhouser, Alberto Rodriguez

Research output: Contribution to journal › Article

2 Scopus citations

Abstract

This article presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses an object-agnostic grasping framework to map from visual observations to actions: inferring dense pixel-wise probability maps of the affordances for four different grasping primitive actions. It then executes the action with the highest affordance and recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional data collection or re-training. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT–Princeton Team system that took first place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu/.
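The action-selection step described in the abstract — infer a dense pixel-wise affordance map for each of the four grasping primitives, then execute the primitive at the pixel with the highest affordance — can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name `select_grasp` and the `(4, H, W)` array layout are assumptions for the example.

```python
import numpy as np

def select_grasp(affordance_maps):
    """Pick the grasping primitive and pixel with the highest affordance.

    affordance_maps: array of shape (4, H, W), one dense pixel-wise
    probability map per grasping primitive action.
    Returns (primitive_index, (row, col), affordance_score).
    """
    maps = np.asarray(affordance_maps)
    # Global argmax jointly over primitives and pixel locations.
    primitive, row, col = np.unravel_index(np.argmax(maps), maps.shape)
    return primitive, (row, col), maps[primitive, row, col]

# Toy example: random maps with one planted peak.
maps = np.random.default_rng(0).random((4, 8, 8)) * 0.5
maps[2, 3, 5] = 0.99  # clear winner for primitive 2 at pixel (3, 5)
primitive, pixel, score = select_grasp(maps)
# primitive == 2, pixel == (3, 5), score == 0.99
```

In the actual system the maps come from a fully convolutional network over the visual observation; the sketch only shows the joint argmax over primitives and pixels that turns those maps into a single action.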

Original language: English (US)
Journal: International Journal of Robotics Research
DOI: 10.1177/0278364919868017
State: Published - Jan 1 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Modeling and Simulation
  • Mechanical Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
  • Applied Mathematics

Keywords

  • Amazon Robotics Challenge
  • active perception
  • affordance learning
  • cross-domain image matching
  • deep learning
  • grasping
  • one-shot recognition
  • pick-and-place
  • vision for manipulation


Cite this

Zeng, A., Song, S., Yu, K. T., Donlon, E., Hogan, F. R., Bauza, M., Ma, D., Taylor, O., Liu, M., Romo, E., Fazeli, N., Alet, F., Chavan Dafle, N., Holladay, R., Morona, I., Nair, P. Q., Green, D., Taylor, I., Liu, W., ... Rodriguez, A. (2019). Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. International Journal of Robotics Research. https://doi.org/10.1177/0278364919868017