TY - JOUR
T1 - Is Q-learning provably efficient?
AU - Jin, Chi
AU - Allen-Zhu, Zeyuan
AU - Bubeck, Sebastien
AU - Jordan, Michael I.
N1 - Funding Information:
We thank Nan Jiang, Sham M. Kakade, Greg Yang and Chicheng Zhang for valuable discussions. This work was supported in part by the DARPA program on Lifelong Learning Machines, and Microsoft Research Gratis Traveler program.
Publisher Copyright:
© 2018 Curran Associates Inc. All rights reserved.
PY - 2018
Y1 - 2018
N2 - Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [7, 22]. The theoretical question of “whether model-free algorithms can be made sample efficient” is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret Õ(√(H³SAT)), where S and A are the numbers of states and actions, H is the number of steps per episode, and T is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single √H factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes √T regret without requiring access to a “simulator.”
AB - Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [7, 22]. The theoretical question of “whether model-free algorithms can be made sample efficient” is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret Õ(√(H³SAT)), where S and A are the numbers of states and actions, H is the number of steps per episode, and T is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single √H factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes √T regret without requiring access to a “simulator.”
UR - http://www.scopus.com/inward/record.url?scp=85064819866&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064819866&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85064819866
SN - 1049-5258
VL - 2018-December
SP - 4863
EP - 4873
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 32nd Conference on Neural Information Processing Systems, NeurIPS 2018
Y2 - 2 December 2018 through 8 December 2018
ER -