Differentially Private ADMM for Regularized Consensus Optimization

Xuanyu Cao, Junshan Zhang, H. Vincent Poor, Zhi Tian

Research output: Contribution to journal › Article

Abstract

Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. In multi-agent optimization, however, the message exchange of ADMM may reveal agents' sensitive information, which can be overheard by malicious attackers. This drawback hinders the application of ADMM to privacy-aware multi-agent systems. In this paper, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning or private usage patterns in resource allocation. We develop a variant of ADMM that preserves agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which a designated level of differential privacy is achieved. Further, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients and the regularizer has smooth gradients or bounded subgradients.
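To make the mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' exact algorithm, of consensus ADMM in which noise perturbs the broadcast public signal. It assumes illustrative quadratic local costs f_i(x) = ½‖A_i x − b_i‖², an ℓ2 regularizer g(z) = (λ/2)‖z‖², and a hypothetical Laplace noise scale `sigma`; the paper calibrates the noise to a designated privacy level, which this sketch does not do.

```python
# Sketch of noise-injected consensus ADMM (illustrative, not the paper's method):
# minimize sum_i 0.5*||A_i x_i - b_i||^2 + 0.5*lam*||z||^2  s.t.  x_i = z.
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 20, 5, 4                       # samples per agent, dimension, # agents
A = [rng.standard_normal((n, d)) for _ in range(N)]
b = [rng.standard_normal(n) for _ in range(N)]
rho, lam, sigma = 1.0, 0.1, 0.05         # penalty, regularization, noise scale (assumed)

x = [np.zeros(d) for _ in range(N)]      # local primal variables
u = [np.zeros(d) for _ in range(N)]      # scaled dual variables
z = np.zeros(d)                          # public consensus signal

for k in range(100):
    # Local update: each agent solves its regularized least-squares subproblem,
    # argmin_x 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + u_i||^2.
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # Consensus update in closed form, then Laplace noise is injected so that
    # only a privatized version of z is ever broadcast to (or overheard by) agents.
    z_exact = rho * sum(x[i] + u[i] for i in range(N)) / (lam + rho * N)
    z = z_exact + rng.laplace(scale=sigma, size=d)   # privacy noise (scale illustrative)
    # Dual update: agents see only the noisy public signal z.
    for i in range(N):
        u[i] += x[i] - z

print("consensus estimate:", z)
```

Note the design point: the noise enters the broadcast signal z rather than each agent's local variable, matching the abstract's description of perturbing public messages; larger `sigma` strengthens privacy but, as the paper's convergence analysis quantifies, degrades optimization accuracy.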

Original language: English (US)
Journal: IEEE Transactions on Automatic Control
DOIs
State: Accepted/In press - 2020

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Keywords

  • Convergence
  • Convex functions
  • Cost function
  • Machine learning
  • Privacy
