Sample complexity of asynchronous Q-learning: Sharper analysis and variance reduction

Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Research output: Contribution to journal › Conference article › peer-review

41 Scopus citations

Abstract

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a γ-discounted MDP with state space S and action space A, we demonstrate that the ℓ∞-based sample complexity of classical asynchronous Q-learning — namely, the number of samples needed to yield an entrywise ε-accurate estimate of the Q-function — is at most on the order of 1/(μ_min (1-γ)^5 ε²) + t_mix/(μ_min (1-γ)) up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, t_mix and μ_min denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost of waiting for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least |S||A|. Further, the scaling on the discount complexity can be improved by means of variance reduction.
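The algorithm analyzed above can be sketched as follows: a single Markovian trajectory is generated by a behavior policy, and at each step only the visited state-action entry of the Q-table is updated with a constant learning rate. This is a minimal illustrative sketch, not the authors' code; the MDP representation (tabular arrays P, R), the function name, and all parameter values are assumptions chosen for clarity.

```python
import numpy as np

def async_q_learning(P, R, behavior_policy, gamma, eta, T, rng=None):
    """Asynchronous Q-learning from one Markovian sample trajectory.

    P: transition kernel, shape (S, A, S); P[s, a] sums to 1
    R: reward table, shape (S, A)
    behavior_policy: shape (S, A); each row is a distribution over actions
    gamma: discount factor in (0, 1)
    eta: constant learning rate (as in the bound above)
    T: number of samples drawn from the trajectory
    """
    rng = np.random.default_rng(rng)
    S, A = R.shape
    Q = np.zeros((S, A))
    s = 0  # arbitrary initial state
    for _ in range(T):
        a = rng.choice(A, p=behavior_policy[s])
        s_next = rng.choice(S, p=P[s, a])
        # Asynchronous update: only the visited (s, a) entry changes.
        Q[s, a] += eta * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q
```

On a trivial one-state, one-action MDP with reward 1 and γ = 0.5, the iterates contract toward the optimal value Q* = 1/(1-γ) = 2, illustrating the fixed point that the sample-complexity bound is about.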

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6 2020 - Dec 12 2020

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
