Perspectives of approximate dynamic programming

Warren Buckler Powell

Research output: Contribution to journal › Article › peer-review

45 Scopus citations


Approximate dynamic programming has evolved, initially independently, within operations research, computer science and the engineering controls community, all searching for practical tools for solving sequential stochastic optimization problems. More so than other communities, operations research continued to develop the theory behind the basic model introduced by Bellman with discrete states and actions, even while authors as early as Bellman himself recognized its limits due to the “curse of dimensionality” inherent in discrete state spaces. In response to these limitations, subcommunities in computer science, control theory and operations research have developed a variety of methods for solving different classes of stochastic, dynamic optimization problems, creating the appearance of a jungle of competing approaches. In this article, we show that there is actually a common theme to these strategies, and underpinning the entire field remain the fundamental algorithmic strategies of value and policy iteration that were first introduced in the 1950s and 1960s.
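The value iteration mentioned in the abstract can be illustrated in a few lines for the discrete-state, discrete-action model Bellman introduced. The sketch below uses a hypothetical two-state, two-action MDP (the transition matrix `P`, reward matrix `R`, and discount factor `gamma` are invented for illustration, not taken from the article); it simply iterates the Bellman optimality operator to a fixed point.

```python
import numpy as np

# Hypothetical two-state, two-action MDP (illustrative values only).
# P[a][s][s'] is the probability of moving from state s to s' under action a;
# R[a][s] is the expected one-step reward for taking action a in state s.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.4, 0.6]],   # transitions under action 1
])
R = np.array([
    [1.0, 0.0],                 # rewards under action 0
    [0.5, 2.0],                 # rewards under action 1
])
gamma = 0.9                     # discount factor (< 1 ensures convergence)

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate V <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)      # Q[a, s], one Bellman backup
        V_new = Q.max(axis=0)        # greedy value per state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(V, policy)
```

Policy iteration follows the same contraction principle but alternates a full policy-evaluation step with a greedy policy-improvement step; the "curse of dimensionality" the article discusses arises because both methods must enumerate the state space, which is exactly what approximate methods avoid.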

Original language: English (US)
Pages (from-to): 319-356
Number of pages: 38
Journal: Annals of Operations Research
Issue number: 1-2
State: Published - Jun 1 2016

All Science Journal Classification (ASJC) codes

  • General Decision Sciences
  • Management Science and Operations Research

