Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Research output: Contribution to journal › Article › peer-review



Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_\infty$-based sample complexity of classical asynchronous Q-learning, namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function, is at most on the order of $\frac{1}{\mu_{\mathsf{min}}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathsf{mix}}}{\mu_{\mathsf{min}}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{\mathsf{mix}}$ and $\mu_{\mathsf{min}}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the sample complexity in the synchronous case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least $|\mathcal{S}||\mathcal{A}|$ for all scenarios, and by a factor of at least $t_{\mathsf{mix}}|\mathcal{S}||\mathcal{A}|$ for any sufficiently small accuracy level $\varepsilon$. Further, we demonstrate that the scaling on the effective horizon $\frac{1}{1-\gamma}$ can be improved by means of variance reduction.
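The setting analyzed above — Q-learning run along a single Markovian trajectory, with each step updating only the visited state-action pair under a constant learning rate — can be sketched as follows. This is a minimal illustration on a hypothetical toy MDP; the transition kernel `P`, reward table `R`, the uniform-random behavior policy, and all parameter values are assumptions for the sake of the example, not taken from the paper.

```python
import numpy as np

def async_q_learning(P, R, gamma, eta, num_steps, seed=0):
    """Classical asynchronous Q-learning on a single trajectory.

    P: transition kernel, shape (S, A, S), rows summing to 1 (assumed toy input).
    R: deterministic rewards, shape (S, A) (assumed toy input).
    gamma: discount factor; eta: constant learning rate, as in the setting above.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape
    Q = np.zeros((S, A))
    s = 0
    for _ in range(num_steps):
        a = rng.integers(A)                    # behavior policy: uniform random
        s_next = rng.choice(S, p=P[s, a])      # next state along the trajectory
        target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += eta * (target - Q[s, a])    # update only the visited (s, a)
        s = s_next                             # continue the same trajectory
    return Q

# Toy 2-state, 2-action MDP: action 1 in state 0 yields reward 1 and stays put,
# so Q*(0, 1) = 1 / (1 - gamma) = 10 under gamma = 0.9.
P = np.array([[[0.9, 0.1], [1.0, 0.0]],
              [[0.5, 0.5], [0.2, 0.8]]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Q = async_q_learning(P, R, gamma=0.9, eta=0.1, num_steps=50_000)
```

With a constant learning rate the iterates fluctuate around the fixed point rather than converge exactly, which is the regime the bound above is stated for.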

Original language: English (US)
Pages (from-to): 448-473
Number of pages: 26
Journal: IEEE Transactions on Information Theory
Issue number: 1
State: Published - Jan 1 2022

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences


Keywords

  • Markovian samples
  • Model-free reinforcement learning
  • TD learning
  • asynchronous Q-learning
  • mixing time
  • variance reduction


