TY - GEN
T1 - Random Orthogonalization for Private Wireless Federated Learning
AU - Ul Zuhra, Sadaf
AU - Seif, Mohamed
AU - Banawan, Karim
AU - Poor, H. Vincent
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - We consider the problem of private wireless federated learning over a massive MIMO multiple-access channel (MAC). In this problem, a parameter server (PS) with M antennas trains a global machine learning model with the aid of K single-antenna users. Each user trains a local model to update the PS's global model without leaking information about its local model. By harnessing the additive nature of the MAC, the PS aggregates the local updates and updates the global model. We show that, by adopting the random orthogonalization technique together with careful noise injection by the users, the privacy of the local models can be maintained under local differential privacy metrics without sacrificing the accuracy or convergence rate of the global machine learning model. We derive the exact achievable privacy level. Our results show that the privacy level is a function of the channel gains. We substantiate our findings on a standard classification task, which achieves an accuracy of 89% in fewer than 15 communication rounds while maintaining an acceptable privacy level for the users' local models. Moreover, numerical results show that the privacy leakage decreases with the number of users K and increases with the number of PS antennas M.
AB - We consider the problem of private wireless federated learning over a massive MIMO multiple-access channel (MAC). In this problem, a parameter server (PS) with M antennas trains a global machine learning model with the aid of K single-antenna users. Each user trains a local model to update the PS's global model without leaking information about its local model. By harnessing the additive nature of the MAC, the PS aggregates the local updates and updates the global model. We show that, by adopting the random orthogonalization technique together with careful noise injection by the users, the privacy of the local models can be maintained under local differential privacy metrics without sacrificing the accuracy or convergence rate of the global machine learning model. We derive the exact achievable privacy level. Our results show that the privacy level is a function of the channel gains. We substantiate our findings on a standard classification task, which achieves an accuracy of 89% in fewer than 15 communication rounds while maintaining an acceptable privacy level for the users' local models. Moreover, numerical results show that the privacy leakage decreases with the number of users K and increases with the number of PS antennas M.
UR - http://www.scopus.com/inward/record.url?scp=85190376776&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85190376776&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF59524.2023.10476928
DO - 10.1109/IEEECONF59524.2023.10476928
M3 - Conference contribution
AN - SCOPUS:85190376776
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 233
EP - 236
BT - Conference Record of the 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 57th Asilomar Conference on Signals, Systems and Computers, ACSSC 2023
Y2 - 29 October 2023 through 1 November 2023
ER -