TY - JOUR
T1 - Decision Transformers For Wireless Communications
T2 - A New Paradigm Of Resource Management
AU - Zhang, Jie
AU - Li, Jun
AU - Wang, Zhe
AU - Shi, Long
AU - Jin, Shi
AU - Chen, Wen
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2025/4
Y1 - 2025/4
N2 - As the next generation of mobile systems evolves, artificial intelligence (AI) is expected to deeply integrate with wireless communications for resource management in variable environments. In particular, deep reinforcement learning (DRL) is an important tool for addressing stochastic optimization issues of resource allocation. However, DRL has to start each new training process from the beginning once the state and action spaces change, causing low sample efficiency and poor generalization ability. Moreover, each DRL training process may take a large number of epochs to converge, which is unacceptable for time-sensitive scenarios. In this article, we adopt an alternative AI technology, namely, decision transformer (DT), and propose a DT-based adaptive decision architecture for wireless resource management. This architecture innovates by constructing pre-trained models in the cloud and then fine-tuning personalized models at the edges. By leveraging the power of DT models learned over offline datasets, the proposed architecture is expected to achieve rapid convergence in far fewer training epochs and higher performance in new scenarios with different state and action spaces compared with DRL. We then design DT frameworks for two typical communication scenarios: intelligent reflecting surface-aided communications and unmanned aerial vehicle-aided mobile edge computing. Simulations demonstrate that the proposed DT frameworks achieve a 3–6 times speedup in convergence and better performance relative to the classic DRL method, namely, proximal policy optimization.
AB - As the next generation of mobile systems evolves, artificial intelligence (AI) is expected to deeply integrate with wireless communications for resource management in variable environments. In particular, deep reinforcement learning (DRL) is an important tool for addressing stochastic optimization issues of resource allocation. However, DRL has to start each new training process from the beginning once the state and action spaces change, causing low sample efficiency and poor generalization ability. Moreover, each DRL training process may take a large number of epochs to converge, which is unacceptable for time-sensitive scenarios. In this article, we adopt an alternative AI technology, namely, decision transformer (DT), and propose a DT-based adaptive decision architecture for wireless resource management. This architecture innovates by constructing pre-trained models in the cloud and then fine-tuning personalized models at the edges. By leveraging the power of DT models learned over offline datasets, the proposed architecture is expected to achieve rapid convergence in far fewer training epochs and higher performance in new scenarios with different state and action spaces compared with DRL. We then design DT frameworks for two typical communication scenarios: intelligent reflecting surface-aided communications and unmanned aerial vehicle-aided mobile edge computing. Simulations demonstrate that the proposed DT frameworks achieve a 3–6 times speedup in convergence and better performance relative to the classic DRL method, namely, proximal policy optimization.
UR - https://www.scopus.com/pages/publications/105003646007
UR - https://www.scopus.com/inward/citedby.url?scp=105003646007&partnerID=8YFLogxK
U2 - 10.1109/MWC.007.2400124
DO - 10.1109/MWC.007.2400124
M3 - Article
AN - SCOPUS:105003646007
SN - 1536-1284
VL - 32
SP - 180
EP - 186
JO - IEEE Wireless Communications
JF - IEEE Wireless Communications
IS - 2
ER -