Abstract
Many sequential decision problems can be formulated as Markov decision processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions to increase the rate of convergence. In this paper, we describe a general finite-horizon problem setting where the optimal value function is monotone, present a convergence proof for Monotone-ADP under various technical assumptions, and show numerical results for three application domains: optimal stopping, energy storage/allocation, and glycemic control for diabetes patients. The empirical results indicate that by taking advantage of monotonicity, we can attain high-quality solutions within a relatively small number of iterations, using up to two orders of magnitude less computation than is needed to compute the optimal solution exactly.
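The monotonicity-exploiting step can be illustrated with a minimal sketch: after each stochastic-approximation update at a sampled state, the value estimates are projected back onto the set of monotone functions, so a single observation also improves the estimates at comparable states. The one-dimensional Python sketch below illustrates this idea only; it is not the paper's exact operator, and the function name, the nondecreasing ordering of states, and the stepsize are assumptions made for the example.

```python
import numpy as np

def monotone_projection(values, s, new_value):
    """Enforce a nondecreasing value function after updating state s.

    Hypothetical 1-D illustration: states are indices 0..n-1, ordered so
    that the optimal value function is assumed nondecreasing in the index.
    """
    values = values.copy()
    values[s] = new_value
    # States above s must be at least as large as the new estimate.
    values[s + 1:] = np.maximum(values[s + 1:], new_value)
    # States below s must be no larger than the new estimate.
    values[:s] = np.minimum(values[:s], new_value)
    return values

# Usage: a smoothed update at one sampled state, then the projection.
V = np.zeros(10)          # initial value estimates
s, observed = 4, 2.5      # sampled state and observed value
alpha = 0.5               # stepsize (illustrative choice)
smoothed = (1 - alpha) * V[s] + alpha * observed
V = monotone_projection(V, s, smoothed)
```

Under this sketch, one observation at state 4 lifts the estimates at all higher states as well, which is the mechanism by which monotonicity can accelerate convergence relative to updating one state at a time.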
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1489-1511 |
| Number of pages | 23 |
| Journal | Operations Research |
| Volume | 63 |
| Issue number | 6 |
| DOIs | |
| State | Published - Nov 1 2015 |
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Management Science and Operations Research
Keywords
- Approximate Dynamic Programming
- Energy Storage
- Glycemic Control
- Monotonicity
- Optimal Stopping