Abstract
An outstanding challenge for the widespread deployment of robotic systems such as autonomous vehicles is ensuring safe interaction with humans without sacrificing performance. Existing safety methods often neglect the robot's ability to learn and adapt at runtime, leading to overly conservative behavior. This paper proposes a new closed-loop paradigm for synthesizing safe control policies: by jointly considering the physical dynamics and the robot's learning algorithm, the synthesized policies explicitly account for the robot's evolving uncertainty and its ability to respond quickly to future scenarios as they arise. We leverage adversarial reinforcement learning to make safety analysis tractable under high-dimensional learning dynamics, and we demonstrate that our framework works with both Bayesian belief propagation and implicit learning through large pre-trained neural trajectory predictors.
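The sketch below illustrates the core idea of treating the robot's belief as part of the state, so that safety analysis runs over the joint physical-and-learning dynamics. It is a minimal, hypothetical example, not the paper's implementation: it assumes a 1D car-following model, a binary hypothesis over the human's intent, a Gaussian observation likelihood, and a brute-force worst-case rollout in place of the learned adversarial disturbance policy. All names (`belief_update`, `joint_step`, `worst_case_gap`) and numerical values are illustrative.

```python
import numpy as np

DT = 0.1                              # assumed time step [s]
HUMAN_MODES = np.array([-1.0, 1.0])   # hypothetical human intents: brake vs. accelerate [m/s^2]

def belief_update(belief, observed_accel, noise_std=0.5):
    """Bayesian belief propagation over the human's intent hypotheses.
    Because the belief is part of the robot's state, safety analysis can
    anticipate how quickly future observations resolve uncertainty."""
    likelihood = np.exp(-0.5 * ((observed_accel - HUMAN_MODES) / noise_std) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def joint_step(phys, belief, robot_accel, human_accel):
    """One step of the closed-loop dynamics over the joint (physical, belief) state.
    phys = (gap to the human-driven car [m], relative velocity [m/s])."""
    gap, rel_vel = phys
    gap = gap + rel_vel * DT
    rel_vel = rel_vel + (human_accel - robot_accel) * DT
    return np.array([gap, rel_vel]), belief_update(belief, human_accel)

def worst_case_gap(phys, belief, horizon=20, prune_thresh=0.05):
    """Crude adversarial rollout standing in for a learned disturbance policy:
    the simulated human picks the most hostile acceleration that the robot's
    current belief still deems plausible, while the robot brakes."""
    worst = phys[0]
    for _ in range(horizon):
        plausible = HUMAN_MODES[belief > prune_thresh]   # belief prunes implausible adversary moves
        human_accel = plausible.min()                    # most hostile remaining intent
        phys, belief = joint_step(phys, belief, robot_accel=-2.0, human_accel=human_accel)
        worst = min(worst, phys[0])
    return worst

if __name__ == "__main__":
    phys0 = np.array([10.0, -1.0])    # 10 m gap, closing at 1 m/s
    belief0 = np.array([0.5, 0.5])    # uniform belief over the two intents
    print("worst-case gap over the horizon:", worst_case_gap(phys0, belief0))
```

In the setting described by the abstract, the adversarial disturbance policy is trained with reinforcement learning rather than enumerated as above; that is what keeps the analysis tractable when the learning dynamics (for example, the internal state of a large pre-trained neural trajectory predictor) are high-dimensional.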
| Original language | English (US) |
| --- | --- |
| Journal | Proceedings of Machine Learning Research |
| Volume | 229 |
| State | Published - 2023 |
| Event | 7th Conference on Robot Learning (CoRL 2023), Atlanta, United States, Nov 6-9, 2023 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Active Information Gathering
- Adversarial Reinforcement Learning
- Learning-Aware Safety Analysis