TY - GEN
T1 - Learning to Control in Metric Space with Optimal Regret
AU - Ni, Chengzhuo
AU - Yang, Lin F.
AU - Wang, Mengdi
PY - 2019/9
Y1 - 2019/9
AB - We study online reinforcement learning for finite-horizon deterministic control systems with arbitrary state and action spaces. Suppose the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q functions from experiences. We show that the regret of the algorithm after K episodes is O((DLK)^{\frac{d}{d+1}}H), where D is the diameter of the state-action space, L is a smoothness parameter, and d is the doubling dimension of the state-action space with respect to the given metric. We also establish a near-matching regret lower bound. The proposed method can be adapted to work for more structured transition systems, including the finite-state case and the case where value functions are linear combinations of features, where the method also achieves the optimal regret.
UR - http://www.scopus.com/inward/record.url?scp=85077796609&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077796609&partnerID=8YFLogxK
U2 - 10.1109/ALLERTON.2019.8919864
DO - 10.1109/ALLERTON.2019.8919864
M3 - Conference contribution
T3 - 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
SP - 726
EP - 733
BT - 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
Y2 - 24 September 2019 through 27 September 2019
ER -