TY - GEN
T1 - Spatial Action Maps for Mobile Manipulation
AU - Wu, Jimmy
AU - Sun, Xingyuan
AU - Zeng, Andy
AU - Song, Shuran
AU - Lee, Johnny
AU - Rusinkiewicz, Szymon
AU - Funkhouser, Thomas
N1 - Publisher Copyright:
© 2020, MIT Press Journals. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Typical end-to-end formulations for learning robotic navigation involve predicting a small set of steering command actions (e.g., step forward, turn left, turn right, etc.) from images of the current state (e.g., a bird’s-eye view of a SLAM reconstruction). Instead, we show that it can be advantageous to learn with dense action representations defined in the same domain as the state. In this work, we present “spatial action maps,” in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location. When ConvNets are used to infer spatial action maps from state images, action predictions are spatially anchored on local visual features in the scene, enabling significantly faster learning of complex behaviors for mobile manipulation tasks with reinforcement learning. In our experiments, we task a robot with pushing objects to a goal location, and find that policies learned with spatial action maps achieve much better performance than traditional alternatives.
AB - Typical end-to-end formulations for learning robotic navigation involve predicting a small set of steering command actions (e.g., step forward, turn left, turn right, etc.) from images of the current state (e.g., a bird’s-eye view of a SLAM reconstruction). Instead, we show that it can be advantageous to learn with dense action representations defined in the same domain as the state. In this work, we present “spatial action maps,” in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location. When ConvNets are used to infer spatial action maps from state images, action predictions are spatially anchored on local visual features in the scene, enabling significantly faster learning of complex behaviors for mobile manipulation tasks with reinforcement learning. In our experiments, we task a robot with pushing objects to a goal location, and find that policies learned with spatial action maps achieve much better performance than traditional alternatives.
UR - http://www.scopus.com/inward/record.url?scp=85099845773&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099845773&partnerID=8YFLogxK
U2 - 10.15607/RSS.2020.XVI.035
DO - 10.15607/RSS.2020.XVI.035
M3 - Conference contribution
AN - SCOPUS:85099845773
SN - 9780992374761
T3 - Robotics: Science and Systems
BT - Robotics: Science and Systems
A2 - Toussaint, Marc
A2 - Bicchi, Antonio
A2 - Hermans, Tucker
PB - MIT Press Journals
T2 - 16th Robotics: Science and Systems, RSS 2020
Y2 - 12 July 2020 through 16 July 2020
ER -