Abstract
Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot's task execution, where not having a rich enough hypothesis space leads the robot to update its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose that robots learn conservatively by reasoning in real time about how relevant the human's correction is to the robot's hypothesis space. We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a robot manipulator.
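The abstract's idea of tempering the objective update by the correction's relevance to the hypothesis space can be illustrated with a minimal sketch. The function names, the logistic observation model over the feature change induced by the correction, and the relevance threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def correction_likelihood(feature_diff, theta, beta=5.0):
    """How plausible a human correction is under candidate objective weights `theta`.
    `feature_diff` is Phi(corrected trajectory) - Phi(original trajectory); corrections
    that lower the cost theta . Phi are deemed more likely (logistic in the cost change)."""
    cost_change = float(feature_diff @ theta)
    return 1.0 / (1.0 + np.exp(beta * cost_change))

def relevance_aware_update(belief, hypotheses, feature_diff, relevance_threshold=0.5):
    """Bayesian update over candidate objectives, gated by how well *any* hypothesis
    explains the correction (a simple proxy for hypothesis-space relevance)."""
    likelihoods = np.array([correction_likelihood(feature_diff, th) for th in hypotheses])
    relevance = likelihoods.max()          # best explanation the robot has available
    if relevance < relevance_threshold:    # no hypothesis explains the correction well,
        return belief, relevance           # so learn conservatively and keep the prior
    posterior = belief * likelihoods       # otherwise perform the standard Bayesian update
    return posterior / posterior.sum(), relevance

# Toy example: two candidate objectives (weights over [effort, distance-to-table]);
# the correction barely changes either modeled feature, so the update is skipped.
hypotheses = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
belief = np.array([0.5, 0.5])
feature_diff = np.array([0.01, 0.02])
belief, relevance = relevance_aware_update(belief, hypotheses, feature_diff)
print(belief, relevance)
```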
Original language | English (US) |
---|---|
Pages (from-to) | 796-805 |
Number of pages | 10 |
Journal | Proceedings of Machine Learning Research |
Volume | 87 |
State | Published - 2018 |
Externally published | Yes |
Event | 2nd Conference on Robot Learning, CoRL 2018, Zurich, Switzerland (Oct 29 2018 – Oct 31 2018) |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Bayesian inference
- inverse reinforcement learning
- model misspecification
- online learning
- physical human-robot interaction
- reward learning