Abstract
As multiagent networks grow in scale, distributed optimization becomes increasingly important for complex systems. Convergence, a central goal in this domain, hinges on the analysis of infinite products of stochastic matrices (IPSMs). In this work, we investigate the convergence properties of inhomogeneous IPSMs and derive their convergence rate toward an absolute probability sequence π. We show that this rate is nearly exponential, consistent with existing results on ergodic chains. The analysis relies on delineating the interrelations among Sarymsakov matrices, scrambling matrices, and positive-column matrices. Building on these results for inhomogeneous IPSMs, we propose a decentralized projected subgradient method for time-varying multiagent systems with graph-related stretches in the (sub)gradient descent directions. Convergence of the proposed method is established for convex objective functions and extended to nonconvex objectives satisfying the Polyak-Łojasiewicz (PL) condition. Numerical simulations corroborate the theoretical findings.
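To make the described iteration concrete, the following is a minimal sketch of a decentralized projected subgradient step over a time-varying, row-stochastic mixing sequence. It is not the authors' exact algorithm: the graph-related stretch factors are omitted, and the helper names (`project_onto_ball`, `local_subgradient`, `make_mixing_matrix`), the constraint set, and the local objectives are illustrative assumptions.

```python
# Minimal sketch of a decentralized projected subgradient method with a
# time-varying mixing matrix W(k); all problem data here is illustrative.
import numpy as np

def project_onto_ball(x, radius=10.0):
    """Euclidean projection onto a ball (stand-in for the constraint set X)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def local_subgradient(i, x, targets):
    """Subgradient of the local objective f_i(x) = ||x - targets[i]||_1 (example)."""
    return np.sign(x - targets[i])

def make_mixing_matrix(n, rng):
    """Random row-stochastic matrix standing in for the time-varying graph weights."""
    W = rng.random((n, n)) + np.eye(n)        # positive diagonal keeps a self-weight
    return W / W.sum(axis=1, keepdims=True)   # normalize rows to sum to one

def decentralized_projected_subgradient(n=5, dim=3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(n, dim))       # defines the local objectives f_i
    x = rng.normal(size=(n, dim))             # one iterate per agent
    for k in range(iters):
        W = make_mixing_matrix(n, rng)        # time-varying mixing step
        alpha = 1.0 / np.sqrt(k + 1)          # diminishing step size
        mixed = W @ x                         # consensus: average neighbors' iterates
        for i in range(n):
            g = local_subgradient(i, mixed[i], targets)
            x[i] = project_onto_ball(mixed[i] - alpha * g)
    return x

if __name__ == "__main__":
    iterates = decentralized_projected_subgradient()
    print("Max spread across agents:", np.max(np.ptp(iterates, axis=0)))
```

Under the abstract's assumptions on the mixing sequence (whose infinite products converge to an absolute probability sequence), the agents' iterates reach consensus while jointly descending the sum of the local objectives.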
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 8882-8895 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 36 |
| Issue number | 5 |
| DOIs | |
| State | Published - 2025 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence
Keywords
- Distributed consensus
- distributed optimization
- multiagent systems
- nonconvex optimization