Reinforcement Learning-Based Microgrid Energy Trading With a Reduced Power Plant Schedule

Xiaozhen Lu, Xingyu Xiao, Liang Xiao, Canhuang Dai, Mugen Peng, H. Vincent Poor

Research output: Contribution to journal › Article › peer-review

88 Scopus citations

Abstract

With dynamic renewable energy generation and power demand, microgrids (MGs) exchange energy with each other to reduce their dependence on power plants. In this article, we present a reinforcement learning (RL)-based MG energy trading scheme that chooses the electric energy trading policy according to the predicted future renewable energy generation, the estimated future power demand, and the MG battery level. This scheme uses a deep RL-based energy trading algorithm to address the supply-demand mismatch problem in a smart grid with a large number of MGs, without relying on the renewable energy generation and power demand models of other MGs. A performance bound on the MG utility and on the dependence on the power plant is provided. Simulation results for a smart grid with three MGs, using wind speed data from the Hong Kong Observatory and electricity prices from ISO New England, show that this scheme significantly reduces the average power plant schedule and thus increases the MG utility in comparison with a benchmark scheme.
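The trading policy described above can be illustrated with a minimal tabular Q-learning sketch. This is an assumed toy formulation, not the paper's deep RL algorithm: the state is a discretized triple (predicted generation, estimated demand, battery level), the action is how much energy to buy or sell, and the reward function, state-transition dynamics, and all parameter values below are hypothetical stand-ins for the paper's utility model.

```python
import random

LEVELS = 3            # discretized levels for generation, demand, and battery
ACTIONS = [-1, 0, 1]  # sell one unit, hold, or buy one unit (assumed action set)

def step(state, action):
    """Toy environment: returns (next_state, reward).

    The reward penalizes unmet demand and purchases from the power plant;
    this is an illustrative assumption, not the paper's utility function.
    """
    gen, demand, battery = state
    supply = gen + battery + max(action, 0)
    shortfall = max(demand - supply, 0)
    reward = -shortfall - 0.5 * max(action, 0)  # plant purchases cost extra
    new_battery = min(max(gen + action - demand + battery, 0), LEVELS - 1)
    # Generation and demand evolve randomly in this toy model.
    next_state = (random.randrange(LEVELS), random.randrange(LEVELS), new_battery)
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Standard epsilon-greedy Q-learning over the discretized state space."""
    random.seed(seed)
    Q = {}
    state = (0, 0, 0)
    for _ in range(episodes):
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        next_state, reward = step(state, action)
        best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
    return Q

Q = train()
```

The paper replaces this lookup table with a deep network to handle the larger state space of many MGs; the learning-update structure is otherwise analogous.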

Original language: English (US)
Article number: 8839066
Pages (from-to): 10728-10737
Number of pages: 10
Journal: IEEE Internet of Things Journal
Volume: 6
Issue number: 6
DOIs
State: Published - Dec 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications

Keywords

  • Energy trading
  • power plant schedule
  • reinforcement learning (RL)
  • smart grids
