Distributed Gradient Descent: Nonconvergence to Saddle Points and the Stable-Manifold Theorem

Brian Swenson, Ryan Murray, H. Vincent Poor, Soummya Kar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Scopus citations

Abstract

The paper studies continuous-time distributed gradient descent (DGD) and addresses the problem of showing that, in nonconvex optimization problems, DGD typically converges to local minima rather than saddle points. In centralized settings, nonconvergence to saddle points is usually established via the stable-manifold theorem from classical dynamical systems theory. However, the classical stable-manifold theorem does not apply in the distributed setting. The paper develops an appropriate stable-manifold theorem for DGD, which shows that convergence to saddle points can occur only from a low-dimensional stable manifold. Under appropriate assumptions (e.g., coercivity), the result implies that DGD almost always converges to local minima.
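
For intuition only, and not taken from the paper, the following is a minimal numerical sketch of the kind of dynamics the abstract refers to: each agent i follows continuous-time DGD of the form dx_i/dt = -beta * sum_j L_ij x_j - alpha * grad f_i(x_i), combining a consensus term over the communication graph with a local gradient term. The toy local costs f_i, the ring graph, and the fixed weights alpha and beta below are illustrative assumptions (the paper's setting is more general, e.g., with time-varying weights).

```python
import numpy as np

# Minimal sketch of (Euler-discretized) continuous-time distributed gradient descent.
# Assumptions, not from the paper: toy nonconvex local costs, a ring graph, fixed weights.
rng = np.random.default_rng(0)
n = 5                                   # number of agents
c = rng.normal(size=n)                  # per-agent cost parameters (illustrative)

def grad_f(i, x):
    # Gradient of the toy local cost f_i(x) = 0.25*(x - c_i)^4 - 0.5*x^2
    # (nonconvex and coercive, chosen only for illustration).
    return (x - c[i]) ** 3 - x

# Graph Laplacian of a ring over the n agents (assumed communication topology).
P = np.roll(np.eye(n), 1, axis=0)
L = 2 * np.eye(n) - P - P.T

# Euler discretization of dx_i/dt = -beta * sum_j L_ij x_j - alpha * grad f_i(x_i):
# the Laplacian term pulls agents toward consensus, the gradient term drives local descent.
x = rng.normal(size=n)                  # one scalar state per agent
alpha, beta, dt = 0.05, 1.0, 0.01
for _ in range(20000):
    grads = np.array([grad_f(i, x[i]) for i in range(n)])
    x = x + dt * (-beta * (L @ x) - alpha * grads)

print("agent states after integration (near consensus at a critical point):", x)
```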

Original language: English (US)
Title of host publication: 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 595-601
Number of pages: 7
ISBN (Electronic): 9781728131511
DOIs
State: Published - Sep 2019
Event: 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019 - Monticello, United States
Duration: Sep 24, 2019 - Sep 27, 2019

Publication series

Name: 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019

Conference

Conference: 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
Country/Territory: United States
City: Monticello
Period: 9/24/19 - 9/27/19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Safety, Risk, Reliability and Quality
  • Control and Optimization

Keywords

  • Distributed optimization
  • gradient descent
  • multi-agent systems
  • nonconvex optimization
  • saddle points
  • stable-manifold theorem
