Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion

Brian Swenson, Ryan Murray, H. Vincent Poor, Soummya Kar

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

The article considers distributed gradient flow (DGF) for multiagent nonconvex optimization. DGF is a continuous-time approximation of distributed gradient descent that is often easier to study than its discrete-time counterpart. The article makes two main contributions. First, it considers optimization of nonsmooth, nonconvex objective functions and shows that DGF converges to critical points in this setting. Second, it addresses the problem of avoiding saddle points: it is shown that if agents’ objective functions are smooth and nonconvex, then DGF can converge to a saddle point only from a zero-measure set of initial conditions. To establish this result, the article proves a stable manifold theorem for DGF, which is a fundamental contribution of independent interest. In a companion article, analogous results are derived for discrete-time algorithms.
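To illustrate the general idea (not the article's exact formulation), a distributed gradient flow combines a consensus term over a communication graph with each agent's local gradient. The sketch below is a hypothetical forward-Euler discretization of such dynamics for three agents on a path graph, with made-up local objectives f_i(x) = (x - c_i)^2 + sin(x); the weights, step size, and objectives are illustrative assumptions only.

    import numpy as np

    # Hypothetical local objectives f_i(x) = (x - c_i)^2 + sin(x); centers are arbitrary.
    centers = np.array([0.0, 1.0, 2.0])

    def local_grad(i, x):
        # Gradient of the i-th (hypothetical) local objective.
        return 2.0 * (x - centers[i]) + np.cos(x)

    # Adjacency matrix of a simple path graph 0 - 1 - 2 (assumed communication topology).
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)

    x = np.array([3.0, -2.0, 5.0])   # initial agent states
    dt = 1e-3                        # Euler step approximating continuous time

    for _ in range(20_000):
        consensus = A @ x - A.sum(axis=1) * x           # sum_j a_ij (x_j - x_i) for each agent i
        grads = np.array([local_grad(i, x[i]) for i in range(3)])
        x = x + dt * (consensus - grads)                # Euler step of the flow

    print(x)  # agents approximately agree on a critical point of sum_i f_i

Under these assumptions, the states drive toward consensus while descending the sum of the local objectives, which is the qualitative behavior the article analyzes in continuous time for nonsmooth and nonconvex objectives.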

Original language: English (US)
Pages (from-to): 3949-3964
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Volume: 67
Issue number: 8
DOIs
State: Published - Aug 1 2022
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications

Keywords

  • Distributed optimization
  • gradient descent
  • gradient flow
  • nonconvex optimization
  • nonsmooth optimization
  • saddle point
  • stable manifold
