Systematic poisoning attacks on and defenses for machine learning in healthcare

Mehran Mozaffari-Kermani, Susmita Sur-Kolay, Anand Raghunathan, Niraj K. Jha

Research output: Contribution to journal › Article › peer-review

221 Scopus citations

Abstract

Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a correct diagnosis may have life-threatening consequences and could cause distrust. On the other hand, a false diagnosis may not only prompt users to distrust the machine-learning algorithm and even abandon the entire system, but a false-positive classification may also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks, based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
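
To make the attack model concrete, below is a minimal Python sketch of a statistics-only poisoning attack in the spirit the abstract describes: the attacker knows only per-class feature statistics, synthesizes points resembling one class, and labels them as the target class so the learned boundary drifts toward it. This is an illustrative assumption, not the authors' exact procedure; the synthetic dataset, the poison_points helper, and the choice of logistic regression are all hypothetical stand-ins.

    # Illustrative poisoning sketch (hypothetical; not the paper's exact procedure).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Stand-in for a healthcare dataset: binary classification.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def poison_points(X_tr, y_tr, target_class=1, n_poison=100):
        """Generate poison samples using only per-class statistics (mean/std)
        of the *other* class, then label them as target_class. This mimics an
        attacker who has statistics of the training data but not the data itself."""
        other = X_tr[y_tr != target_class]
        mu, sigma = other.mean(axis=0), other.std(axis=0)
        Xp = rng.normal(mu, sigma, size=(n_poison, X_tr.shape[1]))
        yp = np.full(n_poison, target_class)
        return Xp, yp

    Xp, yp = poison_points(X_tr, y_tr)
    clf_clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    clf_pois = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_tr, Xp]), np.concatenate([y_tr, yp]))

    print("clean accuracy:   ", accuracy_score(y_te, clf_clean.predict(X_te)))
    print("poisoned accuracy:", accuracy_score(y_te, clf_pois.predict(X_te)))
    # Points that look like the other class but carry the target label pull the
    # boundary toward the target class, raising misclassifications into it
    # (a targeted error in the abstract's terminology).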
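
The countermeasure side can be sketched in the same hypothetical setting: before committing a batch of new training data, retrain on it and reject the batch if accuracy on a trusted held-out set deviates beyond a tolerance. The screen_batch helper and the tolerance value are assumptions for illustration; the paper tracks and benchmarks its own set of accuracy metrics.

    # Illustrative defense sketch: accept an incoming batch only if it does not
    # degrade accuracy on a trusted validation set beyond a tolerance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    def screen_batch(X_tr, y_tr, X_val, y_val, X_new, y_new, tol=0.02):
        """Return (accepted, base_acc, cand_acc). The batch is rejected when
        validation accuracy drops by more than `tol` after retraining."""
        base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        base_acc = accuracy_score(y_val, base.predict(X_val))
        cand = LogisticRegression(max_iter=1000).fit(
            np.vstack([X_tr, X_new]), np.concatenate([y_tr, y_new]))
        cand_acc = accuracy_score(y_val, cand.predict(X_val))
        return (base_acc - cand_acc) <= tol, base_acc, cand_acc

Screening of this kind suits evolving datasets, where training data arrives in batches; a clean batch shifts the metrics only slightly, whereas a poisoned batch of meaningful size produces a detectable deviation.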

Original language: English (US)
Article number: 6868201
Pages (from-to): 1893-1905
Number of pages: 13
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 19
Issue number: 6
State: Published - Nov 1 2015

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Health Informatics
  • Electrical and Electronic Engineering
  • Health Information Management

Keywords

  • Healthcare
  • Machine learning
  • Poisoning attacks
  • Security
