TY - JOUR
T1 - A Compressive Privacy approach to Generalized Information Bottleneck and Privacy Funnel problems
AU - Kung, S. Y.
N1 - Funding Information:
This material is based upon work supported in part by the Brandeis Program of the Defense Advanced Research Projects Agency (DARPA) and the Space and Naval Warfare Systems Center Pacific (SSC Pacific) under Contract No. 66001-15-C-4068. The author wishes to thank his colleagues for invaluable discussions and assistance, including Professors Morris J. Chang, Ying Cai, Peiyuan Wu, Yuan Zhou, and Ying Li, and, in particular, his Ph.D. students at Princeton University: Thee Chanyaswad and Mert Al.
Publisher Copyright:
© 2017
PY - 2018/3
Y1 - 2018/3
N2 - This paper explores a Compressive Privacy (CP) methodology for an optimal tradeoff between utility gain and privacy loss. CP represents a dimension-reduced subspace design of optimally desensitized queries that may be safely shared with the public. Built upon information and estimation theory, this paper proposes a “differential mutual information” (DMI) criterion to safeguard privacy protection (PP). Algorithmically, DMI-optimal solutions can be derived via Discriminant Component Analysis (DCA). Moreover, DCA has two machine learning variants (one in the original space and the other in the kernel space) suited to supervised learning applications. By extending the notion of DMI to utility gain and privacy loss, CP unifies the conventional Information Bottleneck (IB) and Privacy Funnel (PF) and leads to two constrained optimizers, named the Generalized Information Bottleneck (GIB) and the Generalized Privacy Funnel (GPF). In supervised learning environments, DCA can be further extended to a DUCA machine learning variant to reach an optimal tradeoff between utility gain and privacy loss. Finally, for fast convergence, a golden-section iterative method is developed specifically for solving the two constrained optimization problems, GIB and GPF.
AB - This paper explores a Compressive Privacy (CP) methodology for an optimal tradeoff between utility gain and privacy loss. CP represents a dimension-reduced subspace design of optimally desensitized queries that may be safely shared with the public. Built upon information and estimation theory, this paper proposes a “differential mutual information” (DMI) criterion to safeguard privacy protection (PP). Algorithmically, DMI-optimal solutions can be derived via Discriminant Component Analysis (DCA). Moreover, DCA has two machine learning variants (one in the original space and the other in the kernel space) suited to supervised learning applications. By extending the notion of DMI to utility gain and privacy loss, CP unifies the conventional Information Bottleneck (IB) and Privacy Funnel (PF) and leads to two constrained optimizers, named the Generalized Information Bottleneck (GIB) and the Generalized Privacy Funnel (GPF). In supervised learning environments, DCA can be further extended to a DUCA machine learning variant to reach an optimal tradeoff between utility gain and privacy loss. Finally, for fast convergence, a golden-section iterative method is developed specifically for solving the two constrained optimization problems, GIB and GPF.
UR - http://www.scopus.com/inward/record.url?scp=85026385440&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85026385440&partnerID=8YFLogxK
U2 - 10.1016/j.jfranklin.2017.07.002
DO - 10.1016/j.jfranklin.2017.07.002
M3 - Article
AN - SCOPUS:85026385440
SN - 0016-0032
VL - 355
SP - 1846
EP - 1872
JO - Journal of the Franklin Institute
JF - Journal of the Franklin Institute
IS - 4
ER -