TY - GEN

T1 - Model-based reinforcement learning with value-targeted regression

AU - Ayoub, Alex

AU - Jia, Zeyu

AU - Szepesvari, Csaba

AU - Wang, Mengdi

AU - Yang, Lin F.

N1 - Funding Information:
Csaba Szepesvári gratefully acknowledges funding from the Canada CIFAR AI Chairs Program, Amii and NSERC. Mengdi Wang gratefully acknowledges funding from the U.S. National Science Foundation (NSF) grant CMMI-1653435, Air Force Office of Scientific Research (AFOSR) grant FA9550-19-1-020, and C3.ai DTI.
Publisher Copyright:
© ICML 2020. All rights reserved.

PY - 2020

Y1 - 2020

N2 - This paper studies model-based reinforcement learning (RL) for regret minimization. We focus on finite-horizon episodic RL where the transition model P belongs to a known family of models P, a special case of which is when models in P take the form of linear mixtures: P_θ = ∑_{i=1}^{d} θ_i P_i. We propose a model-based RL algorithm that is based on the optimism principle: In each episode, the set of models that are consistent with the data collected is constructed. The criterion of consistency is based on the total squared error that the model incurs on the task of predicting state values as determined by the last value estimate along the transitions. The next value function is then chosen by solving the optimistic planning problem with the constructed set of models. We derive a bound on the regret, which, in the special case of linear mixtures, takes the form Õ(d√(H³T)), where H, T and d are the horizon, the total number of steps and the dimension of θ, respectively. In particular, this regret bound is independent of the total number of states or actions, and is close to a lower bound of Ω(√(HdT)). For a general model family P, the regret bound is derived based on the Eluder dimension.

AB - This paper studies model-based reinforcement learning (RL) for regret minimization. We focus on finite-horizon episodic RL where the transition model P belongs to a known family of models P, a special case of which is when models in P take the form of linear mixtures: P_θ = ∑_{i=1}^{d} θ_i P_i. We propose a model-based RL algorithm that is based on the optimism principle: In each episode, the set of models that are consistent with the data collected is constructed. The criterion of consistency is based on the total squared error that the model incurs on the task of predicting state values as determined by the last value estimate along the transitions. The next value function is then chosen by solving the optimistic planning problem with the constructed set of models. We derive a bound on the regret, which, in the special case of linear mixtures, takes the form Õ(d√(H³T)), where H, T and d are the horizon, the total number of steps and the dimension of θ, respectively. In particular, this regret bound is independent of the total number of states or actions, and is close to a lower bound of Ω(√(HdT)). For a general model family P, the regret bound is derived based on the Eluder dimension.

UR - http://www.scopus.com/inward/record.url?scp=85105143400&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85105143400&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85105143400

T3 - 37th International Conference on Machine Learning, ICML 2020

SP - 440

EP - 451

BT - 37th International Conference on Machine Learning, ICML 2020

A2 - Daume, Hal

A2 - Singh, Aarti

PB - International Machine Learning Society (IMLS)

T2 - 37th International Conference on Machine Learning, ICML 2020

Y2 - 13 July 2020 through 18 July 2020

ER -