Abstract
Approximate dynamic programming has evolved, initially independently, within operations research, computer science and the engineering controls community, all searching for practical tools for solving sequential stochastic optimization problems. More so than other communities, operations research continued to develop the theory behind the basic model introduced by Bellman with discrete states and actions, even while authors as early as Bellman himself recognized its limits due to the “curse of dimensionality” inherent in discrete state spaces. In response to these limitations, subcommunities in computer science, control theory and operations research have developed a variety of methods for solving different classes of stochastic, dynamic optimization problems, creating the appearance of a jungle of competing approaches. In this article, we show that there is actually a common theme to these strategies, and that underpinning the entire field remain the fundamental algorithmic strategies of value and policy iteration that were first introduced in the 1950s and 1960s.
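To make the reference to Bellman's basic model concrete, the following is a minimal sketch of value iteration on a small discrete-state, discrete-action MDP. The transition probabilities, rewards, discount factor and problem size are illustrative assumptions, not taken from the paper; the sketch only shows the Bellman backup that value iteration repeats until the values converge.

```python
import numpy as np

# A minimal sketch of value iteration on a tiny, made-up discrete MDP.
# All problem data below (sizes, transitions, rewards, discount) are
# illustrative assumptions, not drawn from the article.

n_states, n_actions = 4, 2
gamma = 0.9  # discount factor

rng = np.random.default_rng(0)
# P[a, s, s'] = probability of moving from state s to s' under action a
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# R[a, s] = expected one-step reward for taking action a in state s
R = rng.random((n_actions, n_states))

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman backup: Q(s, a) = R(a, s) + gamma * sum_{s'} P(a, s, s') V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)          # best action value in each state
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop once the values converge
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy with respect to the converged values
print("Optimal values:", V)
print("Greedy policy:", policy)
```

The curse of dimensionality referred to in the abstract appears here as the size of the arrays P and V, which grow with the number of discrete states; approximate dynamic programming replaces the exact value table with an approximation when that enumeration becomes impractical.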
Original language | English (US)
---|---
Pages (from-to) | 319-356
Number of pages | 38
Journal | Annals of Operations Research
Volume | 241
Issue number | 1-2
DOIs |
State | Published - Jun 1 2016
All Science Journal Classification (ASJC) codes
- General Decision Sciences
- Management Science and Operations Research