Abstract
In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.
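For reference, the two QBRMs named in the abstract have standard definitions; the following is textbook background not taken from this page, stated for a cost random variable \(X\) (larger realizations are worse) at a risk level \(\alpha \in (0,1)\):

```latex
% Standard definitions of the two QBRMs named in the abstract,
% for a cost random variable X and risk level \alpha \in (0,1).
\[
  \operatorname{VaR}_\alpha(X)
  \;=\; \inf\{\, x \in \mathbb{R} : \mathbb{P}(X \le x) \ge \alpha \,\},
\]
\[
  \operatorname{CVaR}_\alpha(X)
  \;=\; \min_{x \in \mathbb{R}}
        \Big\{\, x + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(X - x)^{+}\big] \Big\},
\]
% The minimum in the CVaR formula is attained at x = VaR_alpha(X)
% (the Rockafellar--Uryasev representation).
```

The abstract's point about inefficient sampling follows from these definitions: at high \(\alpha\), only the small fraction of samples in the upper tail of the cost distribution informs the estimate, so naive Monte Carlo wastes most of its draws.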
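The paper's importance-sampling procedure is not detailed in the abstract; purely as a generic illustration of directing samples toward the "risky region," the sketch below estimates VaR and CVaR of a standard-normal cost by drawing from a mean-shifted proposal and reweighting by the likelihood ratio. The distributions, the shift of 2.5, and all names here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Generic importance-sampling sketch (not the paper's procedure):
# estimate CVaR_alpha of a standard-normal cost X by sampling from a
# mean-shifted proposal so more draws land in the upper (risky) tail.
rng = np.random.default_rng(0)
alpha, shift, n = 0.99, 2.5, 100_000

x = rng.normal(loc=shift, scale=1.0, size=n)  # proposal draws
w = np.exp(shift * (shift / 2.0 - x))         # likelihood ratio N(0,1)/N(shift,1)

# Self-normalized weighted quantile gives the VaR estimate.
order = np.argsort(x)
x_sorted, w_sorted = x[order], w[order]
cum = np.cumsum(w_sorted) / np.sum(w_sorted)
var_hat = x_sorted[np.searchsorted(cum, alpha)]

# Rockafellar-Uryasev plug-in for CVaR using the same weighted sample.
excess = np.maximum(x - var_hat, 0.0)
cvar_hat = var_hat + (w @ excess) / (w.sum() * (1.0 - alpha))

print(f"VaR  estimate: {var_hat:.3f}  (exact 2.326)")
print(f"CVaR estimate: {cvar_hat:.3f}  (exact 2.665)")
```

Shifting the proposal mean toward the tail makes tail events common under the sampling distribution while the weights keep the estimator consistent, which is the basic idea behind concentrating effort on the risky region.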
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 554-579 |
| Number of pages | 26 |
| Journal | Mathematics of Operations Research |
| Volume | 43 |
| Issue number | 2 |
| DOIs | |
| State | Published - May 2018 |
All Science Journal Classification (ASJC) codes
- General Mathematics
- Computer Science Applications
- Management Science and Operations Research
Keywords
- Approximate dynamic programming
- Dynamic risk measures
- Energy trading
- Q-learning
- Reinforcement learning