Linearized ADMM Converges to Second-Order Stationary Points for Non-Convex Problems

Songtao Lu, Jason Lee, Meisam Razaviyayn, Mingyi Hong

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, a gradient-based primal-dual method of multipliers is proposed for solving a class of linearly constrained non-convex problems. We show that with random initialization of the primal and dual variables, the algorithm is able to compute second-order stationary points (SOSPs) with probability one. Further, we present applications of the proposed method in popular signal processing and machine learning problems such as decentralized matrix factorization and decentralized training of overparameterized neural networks. One of the key steps in the analysis is to construct a new loss function for these problems such that the required convergence conditions (especially the gradient Lipschitz conditions) can be satisfied without changing the global optimal points.
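The abstract describes a gradient-based primal-dual iteration: a linearized (single gradient step) primal update on the augmented Lagrangian, followed by a dual ascent step on the constraint violation, with random initialization of both variables. The following is a minimal illustrative sketch of that scheme on a toy linearly constrained non-convex problem; the objective `f`, constraint matrix `A`, and all step sizes are hypothetical choices for illustration, not the paper's exact update rules or parameters.

```python
import numpy as np

# Sketch of a gradient-based primal-dual (linearized ADMM) iteration for
#   min f(x)  s.t.  A x = b,  with f smooth but non-convex.
# Illustrative only: f, A, b, rho, and alpha are hypothetical choices.

rng = np.random.default_rng(0)

n = 4
A = np.ones((1, n))  # single linear constraint: sum(x) = 0
b = np.zeros(1)

def grad_f(x):
    # f(x) = 0.25 * sum((x_i^2 - 1)^2): smooth, non-convex, double-well
    return x * (x ** 2 - 1.0)

rho = 1.0     # augmented-Lagrangian penalty parameter
alpha = 0.05  # primal step size

# Random initialization of the primal and dual variables,
# matching the setup described in the abstract
x = rng.standard_normal(n)
lam = rng.standard_normal(1)

for _ in range(50_000):
    # Primal: one gradient step on the augmented Lagrangian
    # L(x, lam) = f(x) + lam^T (Ax - b) + (rho/2) ||Ax - b||^2
    g = grad_f(x) + A.T @ lam + rho * (A.T @ (A @ x - b))
    x = x - alpha * g
    # Dual: gradient ascent on the constraint violation
    lam = lam + rho * (A @ x - b)

residual = np.linalg.norm(A @ x - b)           # feasibility
stat = np.linalg.norm(grad_f(x) + A.T @ lam)   # first-order stationarity
```

With a small primal step size, the iterates in this toy run settle at a feasible point where the Lagrangian gradient vanishes; the paper's contribution is showing that, under its conditions, such randomly initialized iterations avoid strict saddles and reach second-order stationary points with probability one.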

Original language: English (US)
Article number: 9503322
Pages (from-to): 4859-4874
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Volume: 69
State: Published - 2021

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering

Keywords

  • alternating direction method of multipliers (ADMM)
  • first-order stationary points (FOSPs)
  • neural networks
  • non-convex optimization
  • second-order stationary points (SOSPs)

