TY - CONF
T1 - Probabilistically Safe Robot Planning with Confidence-Based Human Predictions
AU - Fisac, Jaime F.
AU - Bajcsy, Andrea
AU - Herbert, Sylvia L.
AU - Fridovich-Keil, David
AU - Wang, Steven
AU - Tomlin, Claire J.
AU - Dragan, Anca D.
N1 - Publisher Copyright:
© 2018, MIT Press Journals. All rights reserved.
PY - 2018
Y1 - 2018
AB - In order to safely operate around humans, robots can employ predictive models of human motion. Unfortunately, these models cannot capture the full complexity of human behavior and necessarily introduce simplifying assumptions. As a result, predictions may degrade whenever the observed human behavior departs from the assumed structure, which can have negative implications for safety. In this paper, we observe that how “rational” human actions appear under a particular model can be viewed as an indicator of that model’s ability to describe the human’s current motion. By reasoning about this model confidence in a real-time Bayesian framework, we show that the robot can very quickly modulate its predictions to become more uncertain when the model performs poorly. Building on recent work in provably-safe trajectory planning, we leverage these confidence-aware human motion predictions to generate assured autonomous robot motion. Our new analysis combines worst-case tracking error guarantees for the physical robot with probabilistic time-varying human predictions, yielding a quantitative, probabilistic safety certificate. We demonstrate our approach with a quadcopter navigating around a human.
UR - http://www.scopus.com/inward/record.url?scp=85127770476&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127770476&partnerID=8YFLogxK
U2 - 10.15607/RSS.2018.XIV.069
DO - 10.15607/RSS.2018.XIV.069
M3 - Conference contribution
AN - SCOPUS:85127770476
SN - 9780992374747
T3 - Robotics: Science and Systems
BT - Robotics
A2 - Kress-Gazit, Hadas
A2 - Srinivasa, Siddhartha S.
A2 - Howard, Tom
A2 - Atanasov, Nikolay
PB - MIT Press Journals
T2 - 14th Robotics: Science and Systems, RSS 2018
Y2 - 26 June 2018 through 30 June 2018
ER -