Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL

Qinghua Liu, Chi Jin, Gellért Weisz, András György, Csaba Szepesvári

Research output: Contribution to journal › Conference article › peer-review

2 Scopus citations

Abstract

While policy optimization algorithms have played an important role in the recent empirical success of Reinforcement Learning (RL), the existing theoretical understanding of policy optimization remains rather limited: existing results are either restricted to tabular MDPs or suffer from highly suboptimal sample complexity, especially in online RL where exploration is necessary. This paper proposes a simple, efficient policy optimization framework, OPTIMISTIC NPG, for online RL. OPTIMISTIC NPG can be viewed as a simple combination of the classic natural policy gradient (NPG) algorithm [Kakade, 2001] and an optimistic policy evaluation subroutine to encourage exploration. For d-dimensional linear MDPs, OPTIMISTIC NPG is computationally efficient and learns an ϵ-optimal policy within Õ(d²/ϵ³) samples, making it the first computationally efficient algorithm whose sample complexity has the optimal dimension dependence Θ̃(d²). It also improves over state-of-the-art results of policy optimization algorithms [Zanette et al., 2021] by a factor of d. For general function approximation that subsumes linear MDPs, OPTIMISTIC NPG is, to the best of our knowledge, also the first policy optimization algorithm that achieves polynomial sample complexity for learning near-optimal policies.
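To illustrate the idea described in the abstract, here is a minimal sketch (not the authors' exact algorithm): a tabular softmax NPG update in which the exact Q-function of the current policy is replaced by an optimistic estimate, i.e., an estimated Q-value plus an exploration bonus. The function name, the Q estimate, and the bonus are hypothetical placeholders.

```python
import numpy as np

def optimistic_npg_step(policy, q_hat, bonus, eta=0.1):
    """One NPG (multiplicative-weights) policy update with an optimistic Q estimate.

    policy : (S, A) array of action probabilities per state
    q_hat  : (S, A) array, estimated Q-values of the current policy (placeholder)
    bonus  : (S, A) array, exploration bonus (e.g., an elliptical bonus in linear MDPs)
    eta    : step size
    """
    q_opt = q_hat + bonus                        # optimistic policy evaluation
    logits = np.log(policy) + eta * q_opt        # NPG = mirror ascent in log-policy space
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

# Toy usage: 3 states, 2 actions, uniform initial policy.
S, A = 3, 2
policy = np.full((S, A), 1.0 / A)
q_hat = np.random.rand(S, A)    # placeholder for a learned Q estimate
bonus = 0.05 * np.ones((S, A))  # placeholder exploration bonus
policy = optimistic_npg_step(policy, q_hat, bonus)
print(policy)
```

The sketch only conveys the structure of the framework (NPG update driven by optimistic value estimates); the paper's analysis concerns linear MDPs and general function approximation rather than this tabular toy.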

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10, 2023 - Dec 16, 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

