What you should know about approximate dynamic programming

Warren Buckler Powell

Research output: Contribution to journal › Article › peer-review

123 Scopus citations

Abstract

Approximate dynamic programming (ADP) is a broad umbrella for a modeling and algorithmic strategy for solving problems that are sometimes large and complex, and are usually (but not always) stochastic. It is most often presented as a method for overcoming the classic curse of dimensionality that is well-known to plague the use of Bellman's equation. For many problems, there are actually up to three curses of dimensionality. But the richer message of approximate dynamic programming is learning what to learn, and how to learn it, to make better decisions over time. This article provides a brief review of approximate dynamic programming, without intending to be a complete tutorial. Instead, our goal is to provide a broader perspective of ADP and how it should be approached from the perspective of different problem classes.
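To make the idea concrete, here is a minimal, hedged sketch of one standard ADP pattern: approximate value iteration with a parametric value function, where states are sampled rather than enumerated, so Bellman's equation never has to be solved over the full state space. The toy inventory problem, its costs, and all names (`features`, `v_hat`, `MAX_INV`, and so on) are illustrative assumptions, not taken from the article.

```python
import random

random.seed(0)

GAMMA = 0.9            # discount factor
MAX_INV = 20           # inventory capacity (toy problem, assumed)
ACTIONS = range(0, 5)  # candidate order quantities

def features(s):
    # Basis functions: constant, normalized inventory, and its square.
    x = s / MAX_INV
    return [1.0, x, x * x]

def v_hat(theta, s):
    # Linear value function approximation V(s) ~ theta . phi(s).
    return sum(t * f for t, f in zip(theta, features(s)))

def sampled_value(theta, s, a, samples=20):
    # Monte Carlo estimate of the one-step Bellman backup for action a,
    # averaging over random demand instead of an exact expectation.
    total = 0.0
    for _ in range(samples):
        demand = random.randint(0, 4)
        stock = min(MAX_INV, s + a)
        reward = 4.0 * min(stock, demand) - 1.0 * a - 0.1 * s
        s_next = max(0, stock - demand)
        total += reward + GAMMA * v_hat(theta, s_next)
    return total / samples

# Approximate value iteration: visit sampled states, regress the
# approximation toward the sampled Bellman target.
theta = [0.0, 0.0, 0.0]
alpha = 0.001
for n in range(2000):
    s = random.randint(0, MAX_INV)  # sample a state, never enumerate all
    target = max(sampled_value(theta, s, a) for a in ACTIONS)
    err = v_hat(theta, s) - target
    theta = [t - alpha * err * f for t, f in zip(theta, features(s))]
```

The key point the sketch illustrates is that the work per iteration depends on the number of basis functions and sampled transitions, not on the size of the state, outcome, or action spaces; this semi-gradient scheme is a common but not universally convergent choice, which is part of why the article stresses learning *what* to learn.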

Original language: English (US)
Pages (from-to): 239-249
Number of pages: 11
Journal: Naval Research Logistics
Volume: 56
Issue number: 3
DOIs
State: Published - Apr 2009

All Science Journal Classification (ASJC) codes

  • Modeling and Simulation
  • Ocean Engineering
  • Management Science and Operations Research

Keywords

  • Approximate dynamic programming
  • Monte Carlo simulation
  • Neuro-dynamic programming
  • Reinforcement learning
  • Stochastic optimization
