Abstract
In Part I of this tutorial, we provided a canonical modeling framework for sequential, stochastic optimization (control) problems. A major feature of this framework is a clear separation between modeling a problem and designing policies to solve it. In Part II, we provide additional discussion of some of the more subtle concepts, such as the construction of a state variable. We illustrate the modeling process using an energy storage problem. We then create five variations of this problem designed to bring out the features of the different policies. The first four of these problems demonstrate that each of the four classes of policies is best for particular problem characteristics. The fifth problem illustrates a hybrid policy that combines the strengths of multiple policy classes.
Field | Value
---|---
Original language | English (US)
Article number | 7100937
Pages (from-to) | 1468-1475
Number of pages | 8
Journal | IEEE Transactions on Power Systems
Volume | 31
Issue number | 2
DOIs |
State | Published - Mar 2016
All Science Journal Classification (ASJC) codes
- Energy Engineering and Power Technology
- Electrical and Electronic Engineering
Keywords
- Approximate dynamic programming
- dynamic programming
- energy storage
- energy systems
- optimal control
- reinforcement learning
- robust optimization
- stochastic optimization
- stochastic programming