Abstract
This paper derives an optimal control strategy for a simple stochastic dynamical system with constant drift and an additive control input. Motivated by the example of a physical system with an unexpected change in its dynamics, we take the drift parameter to be unknown, so that it must be learned while controlling the system. The state of the system is observed through a linear observation model with Gaussian noise. In contrast to most previous work, which focuses on a controller’s asymptotic performance over an infinite time horizon, we minimize a quadratic cost function over a finite time horizon. The performance of our control strategy is quantified by comparing its cost with the cost incurred by an optimal controller that has full knowledge of the parameters. This approach gives rise to several notions of “regret.” We derive a set of control strategies that provably minimize the worst-case regret; they arise from Bayesian strategies that assume a specific fixed prior on the drift parameter. This work suggests that examining Bayesian strategies may lead to optimal or near-optimal control strategies for a much larger class of realistic dynamical models with unknown parameters.
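The setting described above can be illustrated with a minimal simulation sketch: a scalar system with an unknown constant drift, an additive control input, noisy linear observations, and a quadratic cost accumulated over a finite horizon. The learner's cost is compared against an oracle controller that knows the drift, giving a simple notion of regret. All specifics here (the certainty-equivalent control rule, the running-mean drift estimator, noise levels, horizon, and cost weights) are illustrative assumptions, not the paper's actual strategy.

```python
import numpy as np


def simulate(theta, T=300, sigma_w=0.1, sigma_v=0.1, learn=True, seed=0):
    """Simulate x_{t+1} = x_t + theta + u_t + w_t with observations
    y_t = x_t + v_t, and return the accumulated quadratic cost.

    If learn=True, the controller uses a running estimate of theta built
    from observed increments (certainty equivalence); otherwise it uses
    the true theta (the oracle controller). This rule is a hypothetical
    baseline, not the minimax-regret strategy derived in the paper.
    """
    rng = np.random.default_rng(seed)
    x = 0.0                 # true (hidden) state
    theta_hat, n = 0.0, 0   # running-mean drift estimate
    y_prev, u_prev = None, None
    cost = 0.0
    for _ in range(T):
        y = x + sigma_v * rng.standard_normal()   # noisy linear observation
        if y_prev is not None:
            # y - y_prev - u_prev is approximately theta plus noise
            n += 1
            theta_hat += (y - y_prev - u_prev - theta_hat) / n
        drift = theta_hat if learn else theta
        u = -y - drift                            # try to drive the state to zero
        cost += x**2 + 0.1 * u**2                 # quadratic state + control cost
        x = x + theta + u + sigma_w * rng.standard_normal()
        y_prev, u_prev = y, u
    return cost


# Regret: extra cost paid for not knowing the drift in advance.
regret = simulate(theta=1.0, learn=True) - simulate(theta=1.0, learn=False)
```

Here the learner must spend the early part of the horizon estimating the drift, which is exactly why finite-horizon regret (rather than asymptotic performance) is the relevant measure in this setting.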
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 870-880 |
| Number of pages | 11 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 168 |
| State | Published - 2022 |
| Event | 4th Annual Learning for Dynamics and Control Conference, L4DC 2022 - Stanford, United States. Duration: Jun 23 2022 → Jun 24 2022 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- adaptive control
- competitive ratio
- learning
- optimal control
- regret