Risk-averse approximate dynamic programming with quantile-based risk measures

Daniel R. Jiang, Warren Buckler Powell

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the "risky region" as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.
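For readers unfamiliar with the risk measures named above, the display below gives the standard definitions of VaR and CVaR at level α (the CVaR in its Rockafellar–Uryasev form), together with the nested form of a dynamic risk measure that composes one-step risk measures over the horizon. The notation is illustrative and may differ from the paper's own.

```latex
\[
\operatorname{VaR}_\alpha(X) = \inf\{\, x \in \mathbb{R} : \mathbb{P}(X \le x) \ge \alpha \,\},
\qquad
\operatorname{CVaR}_\alpha(X) = \min_{x \in \mathbb{R}}
\left\{ x + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - x)_+\big] \right\},
\]
\[
J_t \;=\; c_t + \rho_t\Big( c_{t+1} + \rho_{t+1}\big( c_{t+2} + \cdots + \rho_{T-1}(c_T) \big) \Big),
\]
```

where each one-step risk measure \(\rho_t\) is a QBRM; this nested recursion is the general shape of the DQBRM objective the abstract describes, not a quotation from the paper.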
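The abstract's importance-sampling procedure steers simulation effort toward the tail ("risky region") that determines a quantile-based risk measure. The following is a minimal, self-contained sketch of that general idea for static CVaR estimation: a standard normal cost model, an exponentially tilted proposal, and likelihood-ratio reweighting. The cost model, the tilt parameter `mu`, and the weighted-quantile step are all illustrative assumptions; this is not the paper's ADP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.95      # risk level for VaR/CVaR
n = 100_000

# Plain Monte Carlo: only ~(1 - alpha) of samples land in the
# upper tail that actually determines CVaR_alpha.
x = rng.normal(0.0, 1.0, n)
var_mc = np.quantile(x, alpha)
cvar_mc = x[x >= var_mc].mean()

# Importance sampling: shift the proposal toward the risky region
# and reweight by the density ratio N(0,1)/N(mu,1).
mu = 2.0                              # tilt parameter (assumed, tuned per problem)
y = rng.normal(mu, 1.0, n)
w = np.exp(-mu * y + 0.5 * mu**2)     # phi(y) / phi(y - mu)

# Weighted-quantile estimate of t = VaR_alpha under the original measure.
order = np.argsort(y)
cw = np.cumsum(w[order]) / w.sum()
t = y[order][np.searchsorted(cw, alpha)]

# Rockafellar-Uryasev form: CVaR_a(X) = t + E[(X - t)_+] / (1 - a),
# with the expectation computed via the importance weights.
cvar_is = t + np.mean(w * np.maximum(y - t, 0.0)) / (1 - alpha)

print(f"plain MC CVaR ~ {cvar_mc:.3f}")   # true value ~ 2.063
print(f"IS CVaR       ~ {cvar_is:.3f}")
```

With the tilted proposal, far more samples fall above the VaR threshold, so the tail average is computed from many more effective samples than plain Monte Carlo at the same budget; this is the variance-reduction effect the abstract's sampling procedure exploits in the sequential setting.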

Original language: English (US)
Pages (from-to): 554-579
Number of pages: 26
Journal: Mathematics of Operations Research
Volume: 43
Issue number: 2
DOIs
State: Published - May 2018

All Science Journal Classification (ASJC) codes

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research

Keywords

  • Approximate dynamic programming
  • Dynamic risk measures
  • Energy trading
  • Q-learning
  • Reinforcement learning
