Abstract
The gradient inversion attack (also known as input recovery from gradients) is an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover clients' private data from the gradients they share. This chapter summarizes existing gradient inversion attacks in federated learning and investigates their potential limitations. The chapter then evaluates several proposed defense mechanisms against these attacks, demonstrating their trade-offs between privacy leakage and data utility. The chapter concludes with several open problems for further research.
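For orientation, gradient inversion attacks of this kind are typically cast as gradient matching: the adversary optimizes a dummy input and label so that the gradient they induce on the shared model matches the client's reported gradient. The sketch below illustrates this idea in the style of "deep leakage from gradients"; the network architecture, input shape, and optimizer settings are illustrative assumptions, not the chapter's experimental setup.

```python
# Minimal gradient-matching (DLG-style) inversion sketch.
# Assumptions: a toy CNN on 32x32 RGB inputs, a single-sample client
# gradient, and L-BFGS optimization of a dummy input and soft label.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()

# "Client" side: the gradient the attacker observes.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())
true_grads = [g.detach() for g in true_grads]

# "Attacker" side: optimize a dummy input/label to match that gradient.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft label, optimized jointly
optimizer = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

def closure():
    optimizer.zero_grad()
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) *
                           F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Gradient-matching objective: squared L2 distance between gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    loss = optimizer.step(closure)

print(f"final gradient-matching loss: {loss.item():.6f}")
print(f"reconstruction MSE vs. private input: "
      f"{F.mse_loss(x_dummy, x_true).item():.6f}")
```

Defenses evaluated in this setting (e.g., gradient perturbation or compression) aim to increase the reconstruction error of such an attacker while limiting the accompanying loss in model utility.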
Original language | English (US) |
---|---|
Title of host publication | Federated Learning |
Subtitle of host publication | Theory and Practice |
Publisher | Elsevier |
Pages | 105-122 |
Number of pages | 18 |
ISBN (Electronic) | 9780443190377 |
ISBN (Print) | 9780443190384 |
DOIs | |
State | Published - Jan 1 2024 |
All Science Journal Classification (ASJC) codes
- General Computer Science
Keywords
- Federated learning
- Privacy
- Reconstruction attack