Abstract

Gradient inversion attacks (also known as input recovery from gradients) are an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover clients' private data from shared gradients. This chapter summarizes existing gradient inversion attacks in federated learning and investigates their potential limitations. The chapter then evaluates several proposed defense mechanisms against these attacks, demonstrating the trade-off between privacy leakage and data utility. The chapter concludes with several open problems for further research.
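The attack family discussed in the abstract can be illustrated with a minimal gradient-matching sketch in the spirit of "Deep Leakage from Gradients" (Zhu et al., 2019): the attacker observes a client's gradient and optimizes a dummy input and label so that their gradient matches it. The toy model, dimensions, iteration count, and optimizer below are illustrative assumptions, not the chapter's actual evaluation setup.

```python
# Minimal sketch of a gradient inversion (gradient-matching) attack.
# Assumes a toy fully-connected classifier and a single-sample client gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Shared federated model (toy stand-in).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Private client data the attacker never observes directly.
x_true = torch.randn(1, 32)
y_true = torch.tensor([3])

# Gradient the client would share in one FedSGD step.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker initializes a dummy input and a soft dummy label, then
# minimizes the distance between the dummy gradient and the true gradient.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(100):
    def closure():
        opt.zero_grad()
        # Cross-entropy against the softmaxed dummy label.
        dummy_loss = -(F.softmax(y_dummy, dim=-1)
                       * F.log_softmax(model(x_dummy), dim=-1)).sum()
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                          create_graph=True)
        # Gradient-matching objective: squared L2 distance between gradients.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).norm().item())
```

Defenses evaluated in this setting (e.g., gradient perturbation, compression, or encryption) typically degrade how closely the dummy gradient can match the true one, which is where the privacy-utility trade-off arises.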

Original language: English (US)
Title of host publication: Federated Learning
Subtitle of host publication: Theory and Practice
Publisher: Elsevier
Pages: 105-122
Number of pages: 18
ISBN (Electronic): 9780443190377
ISBN (Print): 9780443190384
State: Published - Jan 1 2024

All Science Journal Classification (ASJC) codes

  • General Computer Science

Keywords

  • Federated learning
  • Privacy
  • Reconstruction attack

