TY - GEN
T1 - Bilinear Classes: A Structural Framework for Provable Generalization in RL
T2 - 38th International Conference on Machine Learning, ICML 2021
AU - Du, Simon S.
AU - Kakade, Sham M.
AU - Lee, Jason D.
AU - Lovett, Shachar
AU - Mahajan, Gaurav
AU - Sun, Wen
AU - Wang, Ruosong
N1 - Publisher Copyright:
Copyright © 2021 by the author(s)
PY - 2021
Y1 - 2021
N2 - This work introduces Bilinear Classes, a new structural framework that permits generalization in reinforcement learning in a wide variety of settings through the use of function approximation. The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable and, notably, also includes new models, such as the Linear Q∗/V∗ model, in which both the optimal Q-function and the optimal V-function are linear in some known feature space. Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem. These bounds nearly match the best known sample complexity bounds for existing models. Furthermore, this framework also extends to the infinite dimensional (RKHS) setting: for the Linear Q∗/V∗ model, linear MDPs, and linear mixture MDPs, we provide sample complexities that have no explicit dependence on the feature dimension (which could be infinite), but instead depend only on information theoretic quantities.
AB - This work introduces Bilinear Classes, a new structural framework that permits generalization in reinforcement learning in a wide variety of settings through the use of function approximation. The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable and, notably, also includes new models, such as the Linear Q∗/V∗ model, in which both the optimal Q-function and the optimal V-function are linear in some known feature space. Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem. These bounds nearly match the best known sample complexity bounds for existing models. Furthermore, this framework also extends to the infinite dimensional (RKHS) setting: for the Linear Q∗/V∗ model, linear MDPs, and linear mixture MDPs, we provide sample complexities that have no explicit dependence on the feature dimension (which could be infinite), but instead depend only on information theoretic quantities.
UR - http://www.scopus.com/inward/record.url?scp=85161285414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161285414&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85161285414
T3 - Proceedings of Machine Learning Research
SP - 2826
EP - 2836
BT - Proceedings of the 38th International Conference on Machine Learning, ICML 2021
PB - ML Research Press
Y2 - 18 July 2021 through 24 July 2021
ER -