Abstract
Owing to its low communication costs and privacy-promoting capabilities, federated learning (FL) has become a promising tool for training effective machine learning models among distributed clients. However, in this distributed architecture, unreliable clients may upload low-quality models to the aggregator server, degrading or even collapsing training. In this article, we model such unreliable client behaviors and propose a defensive mechanism to mitigate this security risk. Specifically, we first investigate the impact of unreliable clients on the trained models by deriving a convergence upper bound on the loss function based on the gradient-descent updates. Our bounds reveal that, with a fixed amount of total computational resources, there exists an optimal number of local training iterations in terms of convergence performance. We further design a novel defensive mechanism, named deep neural network-based secure aggregation (DeepSA). Our experimental results validate the theoretical analysis. In addition, the effectiveness of DeepSA is verified by comparison with other state-of-the-art defensive mechanisms.
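To make the setting concrete, below is a minimal NumPy sketch of federated averaging in which a fraction of clients are unreliable and upload corrupted models. All names and parameters here (`local_update`, `aggregate`, `local_iters`, the noise scale) are assumptions for illustration. The outlier filter in `aggregate` is a simple norm-based stand-in that only shows why the server must screen uploads; the paper's actual DeepSA mechanism uses a deep neural network at the aggregator and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, local_iters=5):
    """Run a few gradient-descent steps of least-squares on one client's data."""
    for _ in range(local_iters):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def aggregate(uploads, z=2.0):
    """Robust-aggregation stand-in: drop uploaded models far from the
    coordinatewise median, then average the rest. This norm-based heuristic
    is an assumption for illustration; it is NOT the paper's DNN-based
    DeepSA detector."""
    U = np.stack(uploads)
    dists = np.linalg.norm(U - np.median(U, axis=0), axis=1)
    keep = dists <= dists.mean() + z * dists.std()  # never empty: min <= mean
    return U[keep].mean(axis=0)

# Synthetic setting: 10 clients, the first 2 unreliable.
d, n_clients, w_true = 5, 10, np.ones(5)
data = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
for _ in range(20):  # communication rounds
    uploads = []
    for i, (X, y) in enumerate(data):
        w_i = local_update(w.copy(), X, y)
        if i < 2:  # unreliable clients corrupt their uploads with heavy noise
            w_i = w_i + rng.normal(scale=5.0, size=d)
        uploads.append(w_i)
    w = aggregate(uploads)

print("distance to true model:", np.linalg.norm(w - w_true))
```

Varying `local_iters` in this sketch while holding the total number of gradient steps fixed mirrors the trade-off that the paper's convergence bound formalizes: too few local iterations waste communication rounds, while too many amplify the influence of corrupted uploads between aggregations.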
Original language | English (US) |
---|---|
Pages (from-to) | 17308-17319 |
Number of pages | 12 |
Journal | IEEE Internet of Things Journal |
Volume | 8 |
Issue number | 24 |
State | Published - Dec 15 2021 |
Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
Keywords
- Convergence bound
- defensive mechanism
- federated learning (FL)
- unreliable clients