TY - JOUR
T1 - Revisiting Analog Over-the-Air Machine Learning
T2 - The Blessing and Curse of Interference
AU - Yang, Howard H.
AU - Chen, Zihan
AU - Quek, Tony Q.S.
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2007-2012 IEEE.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is a sum of the agents' local loss functions, and the optimization is conducted by analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits it to the edge server simultaneously with the other agents. From the received analog signal, the edge server extracts a noisy aggregated gradient, which is distorted by channel fading and interference, uses it to update the global model, and feeds the model back to all the agents for another round of local computation. Since electromagnetic interference is generally heavy tailed by nature, we adopt the alpha-stable distribution to model its statistics. As a consequence, the global gradient has infinite variance, which precludes conventional convergence analyses that rely on the existence of second-order moments. To circumvent this challenge, we take a new route to analyze the convergence rate, as well as the generalization error, of the algorithm. We also show that the training algorithm can be run in tandem with a momentum scheme to accelerate convergence. Our analyses reveal a two-sided effect of interference on the overall training procedure. On the negative side, heavy-tailed noise slows down the convergence of model training: the heavier the tail of the interference distribution, the slower the algorithm converges. On the positive side, heavy-tailed noise has the potential to increase the generalization power of the trained model: the heavier the tail, the better the model generalizes. This perhaps counterintuitive conclusion implies that the prevailing view of interference - that it is only detrimental to the edge learning system - is outdated, and we should seek new techniques that exploit, rather than simply mitigate, interference for better machine learning in wireless networks.
AB - We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function that is a sum of the agents' local loss functions, and the optimization is conducted by analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms and transmits it to the edge server simultaneously with the other agents. From the received analog signal, the edge server extracts a noisy aggregated gradient, which is distorted by channel fading and interference, uses it to update the global model, and feeds the model back to all the agents for another round of local computation. Since electromagnetic interference is generally heavy tailed by nature, we adopt the alpha-stable distribution to model its statistics. As a consequence, the global gradient has infinite variance, which precludes conventional convergence analyses that rely on the existence of second-order moments. To circumvent this challenge, we take a new route to analyze the convergence rate, as well as the generalization error, of the algorithm. We also show that the training algorithm can be run in tandem with a momentum scheme to accelerate convergence. Our analyses reveal a two-sided effect of interference on the overall training procedure. On the negative side, heavy-tailed noise slows down the convergence of model training: the heavier the tail of the interference distribution, the slower the algorithm converges. On the positive side, heavy-tailed noise has the potential to increase the generalization power of the trained model: the heavier the tail, the better the model generalizes. This perhaps counterintuitive conclusion implies that the prevailing view of interference - that it is only detrimental to the edge learning system - is outdated, and we should seek new techniques that exploit, rather than simply mitigate, interference for better machine learning in wireless networks.
KW - Distributed machine learning
KW - analog over-the-air computing
KW - convergence rate
KW - generalization error
KW - heavy-tailed interference
UR - http://www.scopus.com/inward/record.url?scp=85113881054&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113881054&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2021.3139231
DO - 10.1109/JSTSP.2021.3139231
M3 - Article
AN - SCOPUS:85113881054
SN - 1932-4553
VL - 16
SP - 406
EP - 419
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
IS - 3
ER -
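
Note: the following is a minimal Python sketch, not the authors' code, illustrating the training loop the abstract summarizes: agents' gradients are superimposed over a fading channel, corrupted by alpha-stable (infinite-variance) interference, and the server performs a momentum update. The quadratic per-agent loss, the fading and interference parameters, and all variable names are illustrative assumptions.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

K, d = 10, 5                 # assumed number of agents and model dimension
alpha = 1.8                  # tail index; alpha < 2 implies infinite variance
eta, beta_m = 0.05, 0.9      # assumed learning rate and momentum coefficient

# Hypothetical local data: agent k holds (A_k, b_k) with loss 0.5*||A_k theta - b_k||^2.
A = rng.normal(size=(K, d, d))
b = rng.normal(size=(K, d))

def local_gradient(theta, k):
    """Gradient of agent k's quadratic loss."""
    return A[k].T @ (A[k] @ theta - b[k])

theta = np.zeros(d)
momentum = np.zeros(d)

for t in range(200):
    # Analog over-the-air aggregation: the channel superimposes the agents'
    # gradient-bearing waveforms. Fading is modeled with unit-mean Rayleigh
    # magnitudes; the aggregate interference is symmetric alpha-stable.
    h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=K)
    grads = np.stack([local_gradient(theta, k) for k in range(K)])
    interference = levy_stable.rvs(alpha, 0.0, scale=0.1, size=d, random_state=rng)
    received = (h[:, None] * grads).sum(axis=0) + interference

    # The server treats the superposition as a noisy aggregated gradient,
    # applies momentum, updates the global model, and broadcasts it back.
    g_hat = received / K
    momentum = beta_m * momentum + g_hat
    theta = theta - eta * momentum

print("final parameter estimate:", theta)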