Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

Kang Wei, Jun Li, Ming Ding, Chuan Ma, Yo Seb Jeon, H. Vincent Poor

Research output: Contribution to journal › Article › peer-review


Abstract

Federated learning (FL), a type of distributed machine learning, is vulnerable to external attacks during parameter transmission between learning agents and a model aggregator. In particular, malicious participating clients in FL can purposefully craft their uploaded model parameters to manipulate system outputs, which is known as a model poisoning (MP) attack. In this paper, we propose effective MP algorithms to attack Krum, a classical defensive aggregation rule, at the aggregator. The proposed algorithms are designed to evade detection, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and a designated one, constrained by Krum's selection rule. Then, we develop CMP algorithms against Krum based on the solutions of this optimization problem. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms that incur only a slight performance degradation. Our experimental results demonstrate that the proposed CMP algorithms are effective and can substantially outperform existing attack mechanisms, such as Arjun's attack and the label-flipping attack. More specifically, our original CMP achieves a high attacker's accuracy (≈ 90%). For example, in our experiments on the MNIST dataset, the proposed CMP algorithm against Krum successfully manipulates the aggregated model into misclassifying a given digit as a different one (e.g., 9 as 8). Meanwhile, our CMP algorithm with an approximated constraint achieves an attacker's accuracy (rate of attacker-desired results) of 87%, with a 73% complexity reduction compared to the original CMP.
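To make the setting concrete, below is a minimal Python/NumPy sketch of Krum selection and of the covert-poisoning idea the abstract describes: pushing a malicious update as close as possible to an attacker-designated model while Krum still selects it. The function names (krum, covert_poison) and the bisection line search are illustrative assumptions for this sketch; the paper itself solves a constrained optimization problem, which this heuristic only approximates.

```python
import numpy as np

def krum(updates, f):
    """Krum aggregation (Blanchard et al., 2017): select the update whose
    summed squared distance to its n - f - 2 nearest neighbours is smallest.
    Requires n > f + 2 so that each update has at least one neighbour."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        nearest = np.sort(dists[i])[1:n - f - 1]  # skip self-distance (0)
        scores.append(nearest.sum())
    return updates[int(np.argmin(scores))]

def covert_poison(benign_updates, w_target, f, steps=50):
    """Illustrative covert-MP heuristic (an assumption, not the paper's exact
    algorithm): bisect on an interpolation coefficient to move the malicious
    update as far toward the attacker's designated model w_target as possible
    while Krum still selects it."""
    base = np.mean(benign_updates, axis=0)  # start near the benign consensus
    lo, hi = 0.0, 1.0
    best = base
    for _ in range(steps):
        mid = (lo + hi) / 2
        candidate = (1 - mid) * base + mid * w_target
        pool = list(benign_updates) + [candidate]
        if np.array_equal(krum(pool, f), candidate):
            best, lo = candidate, mid  # still selected: move closer to target
        else:
            hi = mid                   # no longer selected: back off
    return best

# Toy usage: 8 benign updates, 1 covert attacker, tolerance f = 1.
rng = np.random.default_rng(0)
benign = [rng.normal(size=10) for _ in range(8)]
target = rng.normal(loc=5.0, size=10)
poisoned = covert_poison(benign, target, f=1)
```

The bisection assumes that Krum's acceptance is roughly monotone along the interpolation path from the benign consensus to the target, which holds only approximately in practice; the paper's constrained formulation avoids this assumption.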

Original language: English (US)
Pages (from-to): 1196-1209
Number of pages: 14
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 21
Issue number: 3
State: Published - May 1, 2024
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Computer Science
  • Electrical and Electronic Engineering

Keywords

  • Federated learning
  • model poisoning attack
  • robust aggregation
