Abstract
Cooperative multiaccess edge computing (MEC) is a promising paradigm for next-generation mobile networks. However, as the number of users explodes, the computational complexity of existing optimization- or learning-based task placement approaches in cooperative MEC can increase significantly, leading to intolerable MEC decision-making delay. In this article, we propose a mean field game (MFG) guided deep reinforcement learning (DRL) approach for task placement in cooperative MEC, which helps servers make timely task placement decisions and significantly reduces the average service delay. Instead of applying MFG or DRL separately, we jointly leverage MFG and DRL for task placement, letting the equilibrium of the MFG guide the learning direction of the DRL, and we ensure that the MFG and DRL components pursue the same goal. Specifically, we introduce a mean field guided Q-value (MFG-Q), an estimate of the Q-value informed by the Nash equilibrium obtained from the MFG. We evaluate the proposed method's performance using real-world user distribution data. Through extensive simulations, we show that the proposed scheme is effective in making timely decisions and reducing the average service delay. Moreover, the convergence rate of our proposed method outperforms that of pure DRL-based approaches.
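The core idea of the MFG-Q described above can be sketched as a blend of the DRL Q estimate with a Q-value implied by the MFG's Nash equilibrium. The sketch below is a minimal illustration under assumptions: the function names, the convex-combination form, and the blending weight `beta` are hypothetical and need not match the paper's exact formulation.

```python
def mfg_guided_q(q_drl: float, q_mfg: float, beta: float = 0.5) -> float:
    """Hypothetical MFG-guided Q-value: a convex combination of the
    Q-value learned by DRL and the Q-value implied by the MFG's Nash
    equilibrium. `beta` controls how strongly the equilibrium guides
    learning (an illustrative assumption, not the paper's definition)."""
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must lie in [0, 1]")
    return (1.0 - beta) * q_drl + beta * q_mfg


def td_target(reward: float, gamma: float, q_next_drl: float,
              q_next_mfg: float, beta: float = 0.5) -> float:
    """Temporal-difference target that bootstraps from the blended
    MFG-guided value instead of the pure DRL estimate, so the
    equilibrium steers the learning direction."""
    return reward + gamma * mfg_guided_q(q_next_drl, q_next_mfg, beta)
```

With `beta = 0`, this degenerates to standard DRL bootstrapping; with `beta = 1`, the target is driven entirely by the equilibrium estimate.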
Original language | English (US) |
---|---|
Article number | 9049116 |
Pages (from-to) | 9330-9340 |
Number of pages | 11 |
Journal | IEEE Internet of Things Journal |
Volume | 7 |
Issue number | 10 |
DOIs | |
State | Published - Oct 2020 |
Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
Keywords
- Deep reinforcement learning (DRL)
- mean field game (MFG)
- multiaccess edge computing (MEC)
- task placement