Abstract
Reinforcement learning for robotic applications faces the challenge of constraint satisfaction, which currently impedes its application to safety-critical systems. Recent approaches successfully introduce safety based on reachability analysis, determining a safe region of the state space where the system can operate. However, overly constraining the freedom of the system can negatively affect performance, while attempting to learn less conservative safety constraints might fail to preserve safety if the learned constraints are inaccurate. We propose a novel method that uses a principled approach to learn the system's unknown dynamics based on a Gaussian process model and iteratively approximates the maximal safe set. A modified control strategy based on real-time model validation preserves safety under weaker conditions than current approaches. Our framework further incorporates safety into the reinforcement learning performance metric, allowing a better integration of safety and learning. We demonstrate our algorithm on simulations of a cart-pole system and on an experimental quadrotor application, and show how our proposed scheme succeeds in preserving safety where current approaches fail to avoid an unsafe condition.
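The abstract describes three ingredients: a Gaussian process model of the unknown dynamics, a reachability-derived safe set, and a switching control strategy with real-time model validation. The following is a minimal Python sketch of that switching idea, not the paper's implementation: `safe_value`, `u_safe`, and `u_learn` are hypothetical stand-ins for the reachability value function, the optimal safe controller, and the learned policy, and the GP (here via scikit-learn) is assumed to model one-step dynamics residuals in a discrete-time setting.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical components standing in for the paper's ingredients:
#   safe_value(x): reachability value function, positive inside the safe set
#   u_safe(x):     safety-preserving controller from the reachability analysis
#   u_learn(x):    current reinforcement-learning policy

class SafeLearningController:
    def __init__(self, safe_value, u_safe, u_learn, margin=0.1, z=2.0):
        self.safe_value = safe_value
        self.u_safe = u_safe
        self.u_learn = u_learn
        self.margin = margin          # switch before reaching the boundary
        self.z = z                    # confidence width used for validation
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
        self.gp = GaussianProcessRegressor(kernel=kernel)
        self.X, self.Y = [], []       # observed (state, residual) pairs

    def update_model(self, x, d):
        """Refit the GP to observed residuals d = x_next - f_nominal(x, u)."""
        self.X.append(x)
        self.Y.append(d)
        self.gp.fit(np.array(self.X), np.array(self.Y))

    def model_valid(self, x, d):
        """Real-time validation: does the observed residual fall within the
        GP's confidence interval at x?"""
        if not self.X:
            return True
        mu, std = self.gp.predict(np.array([x]), return_std=True)
        return np.all(np.abs(d - mu[0]) <= self.z * std[0])

    def act(self, x, last_residual=None):
        """Switching law: use the safe controller near the safe-set boundary
        or whenever the model fails validation; otherwise let the learned
        policy act freely."""
        invalid = (last_residual is not None
                   and not self.model_valid(x, last_residual))
        if self.safe_value(x) <= self.margin or invalid:
            return self.u_safe(x)
        return self.u_learn(x)
```

In this sketch the validation test weakens the conditions under which safety must be enforced: the safe controller takes over only near the boundary of the learned safe set or when fresh observations contradict the model, rather than constraining the system everywhere.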
Original language | English (US)
---|---
Article number | 7039601
Pages (from-to) | 1424-1431
Number of pages | 8
Journal | Proceedings of the IEEE Conference on Decision and Control
Volume | 2015-February
Issue number | February
DOIs |
State | Published - 2014
Event | 2014 53rd IEEE Annual Conference on Decision and Control, CDC 2014 - Los Angeles, United States. Duration: Dec 15, 2014 → Dec 17, 2014
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization