TY - JOUR
T1 - Differentially Private ADMM for Regularized Consensus Optimization
AU - Cao, Xuanyu
AU - Zhang, Junshan
AU - Poor, H. Vincent
AU - Tian, Zhi
N1 - Funding Information:
Manuscript received June 29, 2019; revised December 17, 2019 and April 26, 2020; accepted September 5, 2020. Date of publication September 8, 2020; date of current version July 28, 2021. This work was supported by the Army Research Office under Grant W911NF-16-1-0448, and by NSF under Grant SaTC-1618768, Grant CPS-1739344, Grant 1704274, and Grant 1741338. Recommended by Associate Editor S. Grammatico. (Corresponding author: Xuanyu Cao.) Xuanyu Cao is with the Coordinated Science Lab, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA (e-mail: xyc@illinois.edu).
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2021/8
Y1 - 2021/8
N2 - Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. The message exchange of the ADMM in multiagent optimization may reveal sensitive information of agents, which can be overheard by malicious attackers. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning and private usage patterns in resource allocation. We develop a variant of the ADMM that can preserve agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which the designated level of differential privacy can be achieved. Furthermore, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients, and the regularizer has smooth gradients or bounded subgradients. We find that to attain the best convergence performance given a certain privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the choice of the number of iterations should balance the tradeoff between the convergence and the privacy leakage of the ADMM, which is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply the proposed algorithm to multiagent linear-quadratic control with private information to showcase its merit in control applications.
AB - Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. The message exchange of the ADMM in multiagent optimization may reveal sensitive information of agents, which can be overheard by malicious attackers. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning and private usage patterns in resource allocation. We develop a variant of the ADMM that can preserve agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which the designated level of differential privacy can be achieved. Furthermore, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients, and the regularizer has smooth gradients or bounded subgradients. We find that to attain the best convergence performance given a certain privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the choice of the number of iterations should balance the tradeoff between the convergence and the privacy leakage of the ADMM, which is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply the proposed algorithm to multiagent linear-quadratic control with private information to showcase its merit in control applications.
KW - ADMM
KW - differential privacy
KW - distributed optimization
UR - http://www.scopus.com/inward/record.url?scp=85090945050&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090945050&partnerID=8YFLogxK
U2 - 10.1109/TAC.2020.3022856
DO - 10.1109/TAC.2020.3022856
M3 - Article
AN - SCOPUS:85090945050
SN - 0018-9286
VL - 66
SP - 3718
EP - 3725
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 8
M1 - 9188006
ER -