Systematic evaluation of privacy risks of machine learning models

Liwei Song, Prateek Mittal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

154 Scopus citations

Abstract

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model. In this paper, we show that prior work on membership inference attacks may severely underestimate the privacy risks by relying solely on training custom neural network classifiers to perform attacks and focusing only on aggregate results over data samples, such as the attack accuracy. To overcome these limitations, we first propose to benchmark membership inference privacy risks by improving existing non-neural-network-based inference attacks and proposing a new inference attack method based on a modification of prediction entropy. We propose to supplement existing neural-network-based attacks with our benchmark attacks to effectively measure the privacy risks. We also propose benchmarks for defense mechanisms that account for adaptive adversaries with knowledge of the defense, and for the trade-off between model accuracy and privacy risks. Using our benchmark attacks, we demonstrate that existing defense approaches against membership inference attacks are not as effective as previously reported. Next, we introduce a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score, which measures an individual sample's likelihood of being a training member; this allows an adversary to identify samples with high privacy risks and perform membership inference attacks with high confidence. We propose to combine existing aggregate privacy analysis with our fine-grained privacy analysis to systematically measure privacy risks. We experimentally validate the effectiveness of the privacy risk score metric and demonstrate that the distribution of privacy risk scores across individual samples is heterogeneous. Finally, we perform an in-depth investigation to understand why certain samples have high privacy risk scores, including correlations with model properties such as model sensitivity, generalization error, and feature embeddings. Our work emphasizes the importance of a systematic and rigorous evaluation of the privacy risks of machine learning models. We publicly release our code at https://github.com/inspire-group/membership-inference-evaluation, and our evaluation mechanisms have also been integrated into Google's TensorFlow Privacy library.
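For concreteness, the following is a minimal NumPy sketch of the two quantities the abstract introduces: the modified prediction entropy used as an attack signal, and an empirical estimate of the privacy risk score. The Mentr formula follows the paper; the function names, the single global threshold, and the histogram-based posterior estimate (with an assumed 0.5 membership prior) are illustrative simplifications, not the paper's exact calibration, which learns class-dependent thresholds from shadow-model data.

```python
import numpy as np

EPS = 1e-12  # numerical floor to avoid log(0)

def modified_entropy(probs, labels):
    """Modified prediction entropy Mentr (Song & Mittal, 2021).

    probs:  (n, k) array of softmax outputs F(x)
    labels: (n,)   array of true class indices y

    Mentr(F(x), y) = -(1 - F_y) * log(F_y) - sum_{i != y} F_i * log(1 - F_i)
    Lower values indicate confident, member-like predictions.
    """
    n = probs.shape[0]
    p = np.clip(probs, EPS, 1 - EPS)
    true_p = p[np.arange(n), labels]
    # Term for the correct class.
    mentr = -(1.0 - true_p) * np.log(true_p)
    # Terms for all incorrect classes.
    log_rev = p * np.log(1.0 - p)
    mentr -= log_rev.sum(axis=1) - log_rev[np.arange(n), labels]
    return mentr

def membership_inference(probs, labels, threshold):
    """Threshold attack: predict 'member' when Mentr falls below threshold."""
    return modified_entropy(probs, labels) < threshold

def privacy_risk_score(values, member_vals, nonmember_vals, bins=50):
    """Empirical posterior P(member | metric value) under a 0.5 prior.

    member_vals / nonmember_vals are Mentr values computed on samples
    whose membership is known (e.g., a shadow model's train/test split);
    the bin count is an illustrative choice.
    """
    lo = min(member_vals.min(), nonmember_vals.min())
    hi = max(member_vals.max(), nonmember_vals.max())
    edges = np.linspace(lo, hi, bins + 1)
    h_in, _ = np.histogram(member_vals, bins=edges, density=True)
    h_out, _ = np.histogram(nonmember_vals, bins=edges, density=True)
    idx = np.clip(np.digitize(values, edges) - 1, 0, bins - 1)
    p_in, p_out = h_in[idx], h_out[idx]
    return p_in / np.maximum(p_in + p_out, EPS)
```

In this sketch, a sample is flagged as a training member when its modified entropy is below the threshold, while the privacy risk score turns the same signal into a per-sample posterior probability of membership, which is what enables the fine-grained, per-sample analysis described in the abstract.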

Original language: English (US)
Title of host publication: Proceedings of the 30th USENIX Security Symposium
Publisher: USENIX Association
Pages: 2615-2632
Number of pages: 18
ISBN (Electronic): 9781939133243
State: Published - 2021
Event: 30th USENIX Security Symposium, USENIX Security 2021 - Virtual, Online
Duration: Aug 11, 2021 – Aug 13, 2021

Publication series

Name: Proceedings of the 30th USENIX Security Symposium

Conference

Conference: 30th USENIX Security Symposium, USENIX Security 2021
City: Virtual, Online
Period: 8/11/21 – 8/13/21

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Safety, Risk, Reliability and Quality
