Abstract
This paper explores a Compressive Privacy (CP) methodology for an optimal tradeoff between utility gain and privacy loss. CP represents a dimension-reduced subspace design of optimally desensitized queries that may be safely shared with the public. Built upon information and estimation theory, this paper proposes a “differential mutual information” (DMI) criterion to safeguard privacy protection (PP). Algorithmically, DMI-optimal solutions can be derived via Discriminant Component Analysis (DCA). Moreover, DCA has two machine learning variants (one in the original space and the other in the kernel space) suited to supervised learning applications. By extending the notion of DMI to both utility gain and privacy loss, CP unifies the conventional Information Bottleneck (IB) and Privacy Funnel (PF) and leads to two constrained optimizers, named the Generalized Information Bottleneck (GIB) and the Generalized Privacy Funnel (GPF). In supervised learning environments, DCA can be further extended to a DUCA machine learning variant to reach an optimal tradeoff between utility gain and privacy loss. Finally, for fast convergence, a golden-section iterative method is developed specifically for solving the two constrained optimization problems, GIB and GPF.
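The golden-section iterative method mentioned above is a classical one-dimensional search with guaranteed linear convergence. As a minimal sketch, the routine below performs golden-section minimization of a unimodal scalar function; the quadratic objective at the end is a hypothetical stand-in for a line search inside the GIB/GPF optimizers, not the paper's actual formulation:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, the golden ratio reduction factor
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; reuse c as the new upper interior point.
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Minimum lies in [c, b]; reuse d as the new lower interior point.
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Hypothetical smooth unimodal objective standing in for a GIB/GPF line search.
x_star = golden_section_min(lambda x: (x - 1.3) ** 2, 0.0, 3.0)
```

Each iteration shrinks the bracketing interval by the constant factor 1/φ ≈ 0.618 while evaluating the objective only once, which is the source of the method's fast, predictable convergence.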
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1846-1872 |
| Number of pages | 27 |
| Journal | Journal of the Franklin Institute |
| Volume | 355 |
| Issue number | 4 |
| DOIs | |
| State | Published - Mar 2018 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Signal Processing
- Computer Networks and Communications
- Applied Mathematics