Abstract
We propose a novel formulation for approximating reachable sets through a minimum discounted reward optimal control problem. The formulation yields a continuous solution that can be obtained by solving a Hamilton-Jacobi equation. Furthermore, the numerical approximation of this solution is the unique fixed point of a contraction mapping. This allows for more efficient solution methods that are not applicable under traditional formulations for computing reachable sets. Lastly, this formulation provides a link between reinforcement learning and learning reachable sets for systems with unknown dynamics, allowing algorithms from the former to be applied to the latter. We use two benchmark examples, a double integrator and a pursuit-evasion game, to show the correctness of the formulation as well as its strengths in comparison to previous work.
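The contraction-mapping claim can be illustrated with a discrete-time analogue: under a discount factor strictly less than one, the Bellman backup for a discounted minimum-reward value function is a contraction, so simple value iteration converges to a unique fixed point. The sketch below is not the authors' implementation; it is a minimal illustration on a gridded double integrator, and all names, grid ranges, the sampled control set, and the target set are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): value iteration for a
# discounted minimum-reward value function on a gridded double integrator.
import numpy as np

# Double integrator x1' = x2, x2' = u with |u| <= 1 (assumed bounds).
dt = 0.05
gamma = 0.95                      # discount < 1 makes the backup a contraction
x1 = np.linspace(-2.0, 2.0, 81)   # position grid (assumed range/resolution)
x2 = np.linspace(-2.0, 2.0, 81)   # velocity grid
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

# Reward l(x): signed distance to an assumed target ball of radius 0.5 at the
# origin (negative inside). The reachable-set approximation is {x : V(x) <= 0}.
l = np.sqrt(X1**2 + X2**2) - 0.5
controls = np.linspace(-1.0, 1.0, 11)   # sampled control values (assumption)

def interp(V, p1, p2):
    """Bilinear interpolation of V at query points (p1, p2), clipped to the grid."""
    i = np.clip((p1 - x1[0]) / (x1[1] - x1[0]), 0, len(x1) - 1.001)
    j = np.clip((p2 - x2[0]) / (x2[1] - x2[0]), 0, len(x2) - 1.001)
    i0, j0 = i.astype(int), j.astype(int)
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * V[i0, j0] + di * (1 - dj) * V[i0 + 1, j0]
            + (1 - di) * dj * V[i0, j0 + 1] + di * dj * V[i0 + 1, j0 + 1])

# Discounted Bellman backup: V(x) = min( l(x), gamma * min_u V(x + f(x,u) dt) ),
# where the minimizing control drives the state toward the target set.
V = l.copy()
for it in range(500):
    best = np.full_like(V, np.inf)
    for u in controls:
        n1 = X1 + X2 * dt          # explicit Euler step of the dynamics
        n2 = X2 + u * dt
        best = np.minimum(best, interp(V, n1, n2))
    V_new = np.minimum(l, gamma * best)
    if np.max(np.abs(V_new - V)) < 1e-4:   # contraction => unique fixed point
        V = V_new
        break
    V = V_new

print("iterations:", it + 1, "fraction of grid with V <= 0:", np.mean(V <= 0))
```

Because the backup is a contraction in the sup norm, the iteration converges geometrically regardless of the initial guess, which is the property the abstract exploits to enable solution methods (including reinforcement-learning-style updates) not available under undiscounted reachability formulations.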
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1097-1103 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 69 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 1 2024 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications
Keywords
- Approximate reachability
- machine learning
- reachability analysis
- safety analysis