Bayesian exploration for approximate dynamic programming

Ilya O. Ryzhov, Martijn R.K. Mes, Warren Buckler Powell, Gerald van den Berg

Research output: Contribution to journal › Article

Abstract

Approximate dynamic programming (ADP) is a general methodological framework for multistage stochastic optimization problems in transportation, finance, energy, and other domains. We propose a new approach to the exploration/exploitation dilemma in ADP that leverages two important concepts from the optimal learning literature: first, we show how a Bayesian belief structure can be used to express uncertainty about the value function in ADP; second, we develop a new exploration strategy based on the concept of value of information and prove that it systematically explores the state space. An important advantage of our framework is that it can be integrated into both parametric and nonparametric value function approximations, which are widely used in practical implementations of ADP. We evaluate this strategy on a variety of resource allocation problems and demonstrate that, although more computationally intensive, it is highly competitive against other exploration strategies.
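The value-of-information idea sketched in the abstract can be illustrated with the classical knowledge-gradient score for independent normal beliefs, a standard construction from the optimal learning literature. This is a minimal sketch only, not the paper's algorithm (which uses correlated beliefs embedded in a value function approximation); all numeric parameters below are hypothetical.

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def knowledge_gradient(mu, sigma, noise_sd):
    """Value-of-information score for each alternative under independent
    normal beliefs N(mu[i], sigma[i]^2) and Gaussian observation noise.

    A higher score means one more observation of that alternative is
    expected to improve the identity of the best alternative the most."""
    n = len(mu)
    scores = []
    for i in range(n):
        var = sigma[i] ** 2
        # Scale by which the belief mean can move after one noisy observation
        sigma_tilde = var / math.sqrt(var + noise_sd ** 2)
        best_other = max(mu[j] for j in range(n) if j != i)
        # Normalized distance to the best competing alternative
        zeta = -abs(mu[i] - best_other) / sigma_tilde
        # Expected improvement of the maximum after the observation
        scores.append(sigma_tilde * (zeta * normal_cdf(zeta) + normal_pdf(zeta)))
    return scores

# Three alternatives: a confident leader, an uncertain runner-up, a clear loser
mu = [1.0, 0.8, 0.0]       # current belief means (hypothetical values)
sigma = [0.1, 1.0, 0.5]    # belief standard deviations
scores = knowledge_gradient(mu, sigma, noise_sd=1.0)
best_to_sample = scores.index(max(scores))
```

The score rewards alternatives that are both close to the current best and still highly uncertain, so measurement effort is directed where it can change the decision; in this toy instance the uncertain runner-up earns the highest score. This mirrors, in miniature, how a value-of-information criterion can steer exploration of a state space.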

Original language: English (US)
Pages (from-to): 198-214
Number of pages: 17
Journal: Operations Research
Volume: 67
Issue number: 1
DOI: 10.1287/opre.2018.1772
State: Published - Jan 2019

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Management Science and Operations Research

Keywords

  • Approximate dynamic programming
  • Bayesian learning
  • Correlated beliefs
  • Optimal learning
  • Value of information


Cite this

Ryzhov, I. O., Mes, M. R. K., Powell, W. B., & van den Berg, G. (2019). Bayesian exploration for approximate dynamic programming. Operations Research, 67(1), 198-214. https://doi.org/10.1287/opre.2018.1772