TY - JOUR
T1 - Can we Generalize and Distribute Private Representation Learning?
AU - Azam, Sheikh Shams
AU - Kim, Taejin
AU - Hosseinalipour, Seyyedali
AU - Joe-Wong, Carlee
AU - Bagchi, Saurabh
AU - Brinton, Christopher
N1 - Funding Information:
This research was partially supported by the Northrop Grumman Cybersecurity Research Consortium (NGCRC), the Office of Naval Research (ONR) under grant N00014-21-1-2472, and the National Science Foundation (NSF) under grant CNS-2106891.
Publisher Copyright:
Copyright © 2022 by the author(s)
PY - 2022
Y1 - 2022
N2 - We study the problem of learning representations that are private yet informative, i.e., provide information about intended “ally” targets while hiding sensitive “adversary” attributes. We propose Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that accounts for multiple ally and adversary attributes, unlike existing PRL solutions. While a centrally aggregated dataset is a prerequisite for most PRL techniques, real-world data is often siloed across multiple distributed nodes that are unwilling to share raw data because of privacy concerns. We address this practical constraint by developing D-EIGAN, the first distributed PRL method that learns representations at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders and the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state-of-the-art by a significant accuracy margin (47% improvement), and D-EIGAN's performance is consistently on par with EIGAN under different network settings.
AB - We study the problem of learning representations that are private yet informative, i.e., provide information about intended “ally” targets while hiding sensitive “adversary” attributes. We propose Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that accounts for multiple ally and adversary attributes, unlike existing PRL solutions. While a centrally aggregated dataset is a prerequisite for most PRL techniques, real-world data is often siloed across multiple distributed nodes that are unwilling to share raw data because of privacy concerns. We address this practical constraint by developing D-EIGAN, the first distributed PRL method that learns representations at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders and the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state-of-the-art by a significant accuracy margin (47% improvement), and D-EIGAN's performance is consistently on par with EIGAN under different network settings.
UR - http://www.scopus.com/inward/record.url?scp=85163131164&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163131164&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85163131164
SN - 2640-3498
VL - 151
SP - 11320
EP - 11340
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022
Y2 - 28 March 2022 through 30 March 2022
ER -