TY - JOUR
T1 - SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
T2 - 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022
AU - Panda, Ashwinee
AU - Mahloujifar, Saeed
AU - Bhagoji, Arjun N.
AU - Chakraborty, Supriyo
AU - Mittal, Prateek
N1 - Funding Information:
This work was supported in part by the National Science Foundation under grant CNS-1553437, Department of Energy's EUREICA grant DE-OE0000920, the ARL's Army Artificial Intelligence Innovation Institute (A2I2), Schmidt DataX award, and Princeton E-ffiliates Award. The work done at IBM research was sponsored by the Combat Capabilities Development Command Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Combat Capabilities Development Command Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
Publisher Copyright:
Copyright © 2022 by the author(s)
PY - 2022
Y1 - 2022
N2 - Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks (e.g., classifying planes as birds) by uploading "poisoned" updates. In this report we introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analyses of our algorithm. To validate its empirical efficacy, we conduct an open-source evaluation at scale across multiple benchmark datasets for computer vision and federated learning.
AB - Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks (e.g., classifying planes as birds) by uploading "poisoned" updates. In this report we introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analyses of our algorithm. To validate its empirical efficacy, we conduct an open-source evaluation at scale across multiple benchmark datasets for computer vision and federated learning.
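N1 - Illustrative sketch: The abstract describes a defense built from device-level gradient clipping followed by global top-k sparsification of the aggregated update. The Python sketch below is one plausible reading of that aggregation rule; the function names and the parameters clip_norm and k are our own illustrative assumptions, not the authors' reference implementation.

import numpy as np

def clip_update(update, clip_norm):
    # Scale a device's update so its L2 norm is at most clip_norm
    # (device-level gradient clipping).
    norm = np.linalg.norm(update)
    return update * (clip_norm / norm) if norm > clip_norm else update

def topk_sparsify(vec, k):
    # Keep only the k largest-magnitude coordinates of the aggregate
    # (global top-k sparsification); all other coordinates are zeroed.
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def aggregate(updates, clip_norm, k):
    # Clip each device's update, average, then sparsify the mean, so a
    # poisoned update can only affect coordinates that survive top-k.
    clipped = [clip_update(u, clip_norm) for u in updates]
    return topk_sparsify(np.mean(clipped, axis=0), k)

# Toy usage: 5 devices, a 10-dimensional model, keep 3 coordinates.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(5)]
print(aggregate(updates, clip_norm=1.0, k=3))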
UR - http://www.scopus.com/inward/record.url?scp=85163125046&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163125046&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85163125046
SN - 2640-3498
VL - 151
SP - 7587
EP - 7624
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 28 March 2022 through 30 March 2022
ER -