Randomized linear programming solves the Markov decision problem in nearly linear (sometimes sublinear) time

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a novel randomized linear programming algorithm for approximating the optimal policy of the discounted-reward and average-reward Markov decision problems. By leveraging value-policy duality, the algorithm adaptively samples state-action-state transitions and makes exponentiated primal-dual updates. We show that it finds an ε-optimal policy in nearly linear runtime in the worst case for a fixed value of the discount factor. When the Markov decision process is ergodic and specified in certain special data formats, for fixed values of certain ergodicity parameters, the algorithm finds an ε-optimal policy using sample size and runtime linear in the total number of state-action pairs, which is sublinear in the input size. These results provide a new avenue and new complexity benchmarks for solving stochastic dynamic programs.
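For context, the "value-policy duality" mentioned in the abstract refers to the classical linear-programming formulation of a discounted MDP, in which the optimal value function solves the primal LP and state-action occupancy measures (equivalently, policies) solve the dual. A standard statement, with $q$ an initial-state distribution, $\gamma$ the discount factor, and $P(j \mid i,a)$, $r(i,a)$ the transition and reward data (the notation here is a conventional choice, not taken from the abstract), is

$$
\begin{aligned}
\text{(primal: values)}\qquad & \min_{v}\; (1-\gamma)\, q^{\top} v
  && \text{s.t. } v(i) \ge r(i,a) + \gamma \sum_{j} P(j \mid i,a)\, v(j) \quad \forall (i,a),\\
\text{(dual: occupancies)}\qquad & \max_{\mu \ge 0}\; \sum_{i,a} \mu(i,a)\, r(i,a)
  && \text{s.t. } \sum_{a} \mu(j,a) = (1-\gamma)\, q(j) + \gamma \sum_{i,a} P(j \mid i,a)\, \mu(i,a) \quad \forall j.
\end{aligned}
$$

The sketch below illustrates the overall shape of such a method on a toy model: sample a state-action pair from the current dual iterate, draw one transition from the model, take a projected stochastic-gradient step on the values, and take an importance-weighted exponentiated (multiplicative) step on the dual variable. This is a minimal didactic sketch only, not the paper's algorithm: the estimators, step sizes, projections, and all names and parameter choices here are assumptions made for illustration.

```python
import numpy as np

def stochastic_primal_dual_mdp(P, r, gamma, q, iters=200_000,
                               alpha=0.005, beta=0.005, seed=0):
    """Didactic stochastic primal-dual sketch for the discounted-MDP LP.

    Not the paper's exact method: estimators, step sizes, and
    projections are simplified for readability.

    P: (S, A, S) transition probabilities; r: (S, A) rewards in [0, 1];
    q: (S,) initial-state distribution; gamma: discount factor in (0, 1).
    """
    rng = np.random.default_rng(seed)
    S, A = r.shape
    v = np.zeros(S)                      # primal iterate: value estimates
    mu = np.full((S, A), 1.0 / (S * A))  # dual iterate: distribution on (s, a)
    v_max = 1.0 / (1.0 - gamma)          # values of a [0,1]-reward MDP lie in [0, v_max]

    for _ in range(iters):
        # adaptively sample a state-action pair from the dual iterate...
        idx = rng.choice(S * A, p=mu.ravel())
        i, a = divmod(idx, A)
        # ...one next state from the model, and one initial state from q,
        # so each gradient estimate touches only a few coordinates
        j = rng.choice(S, p=P[i, a])
        i0 = rng.choice(S, p=q)

        # one-sample Bellman residual r(i,a) + gamma * v(j) - v(i)
        d = r[i, a] + gamma * v[j] - v[i]

        # primal: projected stochastic-gradient descent step on the values;
        # (1-gamma) e_{i0} + gamma e_j - e_i is an unbiased estimate of the
        # gradient (1-gamma) q + gamma P^T mu - (state marginal of mu)
        v[i0] -= alpha * (1.0 - gamma)
        v[j] -= alpha * gamma
        v[i] += alpha
        np.clip(v, 0.0, v_max, out=v)

        # dual: exponentiated-gradient ascent on the sampled coordinate;
        # dividing by mu[i, a] makes the estimate unbiased, and clipping
        # the exponent is crude variance control for this demo (it biases)
        mu[i, a] *= np.exp(np.clip(beta * d / mu[i, a], -30.0, 30.0))
        mu /= mu.sum()

    # read off a deterministic policy from the dual variable
    return mu.argmax(axis=1)
```

A real implementation aiming at the paper's runtime guarantees would keep each iteration cheap, for example by maintaining a sampling tree over mu rather than the O(|S||A|) `rng.choice` and renormalization used in this sketch.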

Original language: English (US)
Pages (from-to): 517-546
Number of pages: 30
Journal: Mathematics of Operations Research
Volume: 45
Issue number: 2
State: Published - May 2020

All Science Journal Classification (ASJC) codes

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research

Keywords

  • Duality
  • Linear programming
  • Markov decision process
  • Primal-dual method
  • Randomized algorithm
  • Runtime complexity
  • Stochastic approximation
