TY - JOUR
T1 - Federated Learning with Unreliable Clients
T2 - Performance Analysis and Mechanism Design
AU - Ma, Chuan
AU - Li, Jun
AU - Ding, Ming
AU - Wei, Kang
AU - Chen, Wen
AU - Poor, H. Vincent
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 61872184, Grant 62002170, and Grant 62071296; in part by the National Key Project under Grant 2020YFB1807700 and Grant 2018YFB1801102; in part by the Science and Technology Commission of Shanghai Municipality (STCSM) under Grant 20JC1416502; and in part by the U.S. National Science Foundation under Grant CCF-1908308.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/12/15
Y1 - 2021/12/15
AB - Owing to its low communication costs and privacy-preserving capabilities, federated learning (FL) has become a promising tool for training effective machine learning models among distributed clients. However, with this distributed architecture, unreliable clients may upload low-quality models to the aggregation server, degrading or even collapsing training. In this article, we model such unreliable client behaviors and propose a defensive mechanism to mitigate this security risk. Specifically, we first investigate the impact of unreliable clients on the models by deriving a convergence upper bound on the loss function based on gradient descent updates. Our bound reveals that, with a fixed amount of total computational resources, there exists an optimal number of local training iterations in terms of convergence performance. We further design a novel defensive mechanism, named deep neural network-based secure aggregation (DeepSA). Our experimental results validate the theoretical analysis, and the effectiveness of DeepSA is verified through comparison with other state-of-the-art defensive mechanisms.
KW - Convergence bound
KW - defensive mechanism
KW - federated learning (FL)
KW - unreliable clients
UR - http://www.scopus.com/inward/record.url?scp=85105852042&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105852042&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2021.3079472
DO - 10.1109/JIOT.2021.3079472
M3 - Article
AN - SCOPUS:85105852042
SN - 2327-4662
VL - 8
SP - 17308
EP - 17319
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 24
ER -