Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. However, the message exchange of the ADMM in multiagent optimization may reveal agents' sensitive information, which malicious attackers can overhear. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning and private usage patterns in resource allocation. We develop a variant of the ADMM that preserves agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which a designated level of differential privacy is achieved. Furthermore, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients, and that the regularizer has smooth gradients or bounded subgradients. We find that, to attain the best convergence performance at a given privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the number of iterations should be chosen to balance the tradeoff between the convergence and the privacy leakage of the ADMM, which is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply the proposed algorithm to multiagent linear-quadratic control with private information to showcase its merit in control applications.
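To illustrate the mechanism described above, the following is a minimal sketch of noise-injected consensus ADMM on a toy problem where each agent holds a scalar quadratic cost f_i(x) = (1/2)(x - a_i)^2. It is not the paper's exact algorithm: the Gaussian noise schedule (`sigma0 * decay**k`, decreasing across iterations, as the abstract suggests), the penalty parameter `rho`, and all variable names are illustrative assumptions.

```python
import random

def dp_consensus_admm(a, rho=1.0, sigma0=0.5, decay=0.9, iters=50, seed=0):
    """Sketch of consensus ADMM with noise injected into the public signal.

    Each agent i minimizes f_i(x) = 0.5 * (x - a_i)**2 subject to consensus;
    the broadcast variable z is perturbed with Gaussian noise whose standard
    deviation decays geometrically (a hypothetical schedule, not the paper's).
    """
    rng = random.Random(seed)
    n = len(a)
    u = [0.0] * n  # scaled dual variables, one per agent
    z = 0.0        # public (broadcast) consensus variable
    for k in range(iters):
        # Local x-updates: closed form for the quadratic cost
        #   argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # Coordinator averages, then perturbs the broadcast signal;
        # the noise magnitude shrinks as the algorithm progresses.
        z = sum(x[i] + u[i] for i in range(n)) / n + rng.gauss(0.0, sigma0 * decay**k)
        # Dual updates
        u = [u[i] + x[i] - z for i in range(n)]
    return z

# The noiseless optimum is the mean of the a_i; with decaying noise the
# iterate z settles close to it.
z = dp_consensus_admm([1.0, 2.0, 6.0])
```

Because the per-iteration noise decays while the ADMM iteration itself contracts, early perturbations are forgotten and the final iterate lands near the true consensus value, mirroring the tradeoff the abstract describes: larger noise protects privacy but slows convergence.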
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering

Keywords
- differential privacy
- distributed optimization