Online Sparse Reinforcement Learning

Botao Hao, Tor Lattimore, Csaba Szepesvári, Mengdi Wang

Research output: Contribution to journal › Conference article › peer-review

17 Scopus citations

Abstract

We investigate the hardness of online reinforcement learning in fixed-horizon, sparse linear Markov decision processes (MDPs), with a special focus on the high-dimensional regime where the ambient dimension is larger than the number of episodes. Our contribution is two-fold. First, we provide a lower bound showing that linear regret is generally unavoidable in this case, even if there exists a policy that collects well-conditioned data. The lower bound construction uses an MDP with a fixed number of states, while the number of actions scales with the ambient dimension. Note that when the horizon is one, i.e., in the case of linear stochastic bandits, linear regret can be avoided. Second, we show that if the learner has oracle access to a policy that collects well-conditioned data, then a variant of Lasso fitted Q-iteration enjoys a nearly dimension-free regret of $\tilde{O}(s^{2/3} N^{2/3})$, where N is the number of episodes and s is the sparsity level. This shows that in the large-action setting, the difficulty of learning can be attributed to the difficulty of finding a good exploratory policy.
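For intuition, the following is a minimal, hypothetical sketch of a Lasso fitted Q-iteration-style procedure run on data gathered by an exploratory policy: backward induction over the horizon, with a Lasso regression estimating a sparse linear Q-function at each stage. The feature map `phi`, the data layout, and the regularization parameter `alpha` are illustrative assumptions and this is not the paper's exact algorithm or analysis.

```python
# Hypothetical sketch of Lasso fitted Q-iteration for a fixed-horizon MDP
# with sparse linear features. Not the authors' implementation.
import numpy as np
from sklearn.linear_model import Lasso


def lasso_fitted_q_iteration(data, phi, actions, horizon, alpha=0.1):
    """Backward induction with a Lasso fit at each stage.

    data[h]  : list of (state, action, reward, next_state) tuples collected
               at stage h by an exploratory (well-conditioned) policy.
    phi(s,a) : d-dimensional feature vector; the true Q-function is assumed
               to be an s-sparse linear function of these features.
    """
    weights = [None] * horizon  # one sparse weight vector per stage

    for h in reversed(range(horizon)):
        X, y = [], []
        for (s, a, r, s_next) in data[h]:
            # Regression target: immediate reward plus greedy value at h+1.
            target = r
            if h + 1 < horizon:
                target += max(
                    phi(s_next, a_next) @ weights[h + 1] for a_next in actions
                )
            X.append(phi(s, a))
            y.append(target)

        model = Lasso(alpha=alpha, fit_intercept=False)
        model.fit(np.asarray(X), np.asarray(y))
        weights[h] = model.coef_  # sparse estimate of the stage-h Q-function

    # Greedy policy with respect to the estimated Q-functions.
    def policy(h, s):
        return max(actions, key=lambda a: phi(s, a) @ weights[h])

    return policy
```

In this sketch, the exploratory-policy oracle only enters through the assumption that `data[h]` is well-conditioned, which is what allows the Lasso estimates, and hence the regret, to depend on the sparsity level s rather than the ambient dimension d.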

Original language: English (US)
Pages (from-to): 316-324
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 130
State: Published - 2021
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: Apr 13, 2021 - Apr 15, 2021

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
