TY - GEN
T1 - Model inversion attacks against collaborative inference
AU - He, Zecheng
AU - Zhang, Tianwei
AU - Lee, Ruby B.
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/12/9
Y1 - 2019/12/9
AB - The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been presented in which an adversary can steal model owners’ private data, and countermeasures have been introduced to achieve privacy-preserving deep learning. However, most studies focus only on data privacy during training and ignore privacy during inference. In this paper, we devise a new set of attacks that compromise inference data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed among different participants, one malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants’ data or computations, or to prediction APIs for querying the system. We evaluate our attacks under different settings, models, and datasets to show their effectiveness and generalization. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats, providing insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
KW - Deep Neural Network
KW - Distributed Computation
KW - Model Inversion Attack
UR - http://www.scopus.com/inward/record.url?scp=85077819528&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077819528&partnerID=8YFLogxK
U2 - 10.1145/3359789.3359824
DO - 10.1145/3359789.3359824
M3 - Conference contribution
AN - SCOPUS:85077819528
T3 - ACM International Conference Proceeding Series
SP - 148
EP - 162
BT - Proceedings - 35th Annual Computer Security Applications Conference, ACSAC 2019
PB - Association for Computing Machinery
T2 - 35th Annual Computer Security Applications Conference, ACSAC 2019
Y2 - 9 December 2019 through 13 December 2019
ER -