TY - JOUR
T1 - Towards Communication-efficient Federated Learning via Sparse and Aligned Adaptive Optimization
AU - Deng, Xiumei
AU - Li, Jun
AU - Wei, Kang
AU - Shi, Long
AU - Xiong, Zehui
AU - Ding, Ming
AU - Chen, Wen
AU - Jin, Shi
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Adaptive moment estimation (Adam), a variant of Stochastic Gradient Descent (SGD), has gained widespread popularity in federated learning (FL) due to its fast convergence. However, federated Adam (FedAdam) algorithms suffer from a threefold increase in uplink communication overhead compared with federated SGD (FedSGD) algorithms, since distributed devices must transmit not only the local model updates but also the first and second moment estimates to the centralized server for aggregation. Motivated by this issue, we propose a novel sparse FedAdam algorithm, FedAdam-SSM, in which distributed devices sparsify the updates of the local model parameters and moment estimates and upload only the sparse representations to the centralized server. To further reduce the communication overhead, FedAdam-SSM incorporates a shared sparse mask (SSM) into the sparsification of the local model parameter updates and moment estimates, eliminating the need for three separate sparse masks. Theoretically, we derive an upper bound on the deviation between the local model trained by FedAdam-SSM and the target model trained by centralized Adam, which depends on the sparsification error and the imbalance of the data distribution. By minimizing this deviation bound, we optimize the SSM to mitigate the learning performance degradation caused by the sparsification error. Additionally, we provide convergence bounds for FedAdam-SSM under both convex and non-convex objective functions, and investigate the impact of the number of local epochs, the learning rate, and the sparsification ratio on the convergence rate of FedAdam-SSM. Experimental results show that FedAdam-SSM outperforms baselines in both convergence rate (over 1.1× faster than the sparse FedAdam baselines) and test accuracy (over 14.5% higher than the quantized FedAdam baselines).
AB - Adaptive moment estimation (Adam), a variant of Stochastic Gradient Descent (SGD), has gained widespread popularity in federated learning (FL) due to its fast convergence. However, federated Adam (FedAdam) algorithms suffer from a threefold increase in uplink communication overhead compared with federated SGD (FedSGD) algorithms, since distributed devices must transmit not only the local model updates but also the first and second moment estimates to the centralized server for aggregation. Motivated by this issue, we propose a novel sparse FedAdam algorithm, FedAdam-SSM, in which distributed devices sparsify the updates of the local model parameters and moment estimates and upload only the sparse representations to the centralized server. To further reduce the communication overhead, FedAdam-SSM incorporates a shared sparse mask (SSM) into the sparsification of the local model parameter updates and moment estimates, eliminating the need for three separate sparse masks. Theoretically, we derive an upper bound on the deviation between the local model trained by FedAdam-SSM and the target model trained by centralized Adam, which depends on the sparsification error and the imbalance of the data distribution. By minimizing this deviation bound, we optimize the SSM to mitigate the learning performance degradation caused by the sparsification error. Additionally, we provide convergence bounds for FedAdam-SSM under both convex and non-convex objective functions, and investigate the impact of the number of local epochs, the learning rate, and the sparsification ratio on the convergence rate of FedAdam-SSM. Experimental results show that FedAdam-SSM outperforms baselines in both convergence rate (over 1.1× faster than the sparse FedAdam baselines) and test accuracy (over 14.5% higher than the quantized FedAdam baselines).
KW - Adam Optimizer
KW - Federated Learning
KW - Sparsification Method
UR - https://www.scopus.com/pages/publications/105015804271
UR - https://www.scopus.com/inward/citedby.url?scp=105015804271&partnerID=8YFLogxK
U2 - 10.1109/TSP.2025.3608715
DO - 10.1109/TSP.2025.3608715
M3 - Article
AN - SCOPUS:105015804271
SN - 1053-587X
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -