TY - GEN
T1 - Distributed Sub-gradient Algorithms with Limited Communications
AU - Rini, Stefano
AU - Rao, Milind
AU - Goldsmith, Andrea
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - We consider the distributed convex optimization scenario in which nodes in a network collectively find the minimum of a function using only local communications and computations. Various sub-gradient algorithms have been developed for this optimization setting for the case in which the global function factorizes as the sum of local functions distributed to the nodes in the network, including the distributed (i) online gradient descent, (ii) Nesterov gradient descent, and (iii) dual averaging algorithms. Generally speaking, these sub-gradient algorithms assume that, in each communication round, nodes can exchange messages of size comparable to that of the optimization variable. For many high-dimensional optimization problems, this communication requirement is beyond the capabilities of the communication network supporting the distributed optimization. For this reason, we propose a dimensionality reduction technique based on random projections to adapt these distributed optimization algorithms to networks with communication links of limited capacity. In particular, we analyze the performance of the proposed dimensionality reduction technique for the three algorithms above, i.e., (i)-(iii). Numerical simulations are presented that illustrate the performance of the proposed approach for these three algorithms.
AB - We consider the distributed convex optimization scenario in which nodes in a network collectively find the minimum of a function using only local communications and computations. Various sub-gradient algorithms have been developed for this optimization setting for the case in which the global function factorizes as the sum of local functions distributed to the nodes in the network, including the distributed (i) online gradient descent, (ii) Nesterov gradient descent, and (iii) dual averaging algorithms. Generally speaking, these sub-gradient algorithms assume that, in each communication round, nodes can exchange messages of size comparable to that of the optimization variable. For many high-dimensional optimization problems, this communication requirement is beyond the capabilities of the communication network supporting the distributed optimization. For this reason, we propose a dimensionality reduction technique based on random projections to adapt these distributed optimization algorithms to networks with communication links of limited capacity. In particular, we analyze the performance of the proposed dimensionality reduction technique for the three algorithms above, i.e., (i)-(iii). Numerical simulations are presented that illustrate the performance of the proposed approach for these three algorithms.
KW - Decentralized methods
KW - Distributed convex optimization
KW - Dual averaging
KW - Gradient descent methods
KW - Nesterov gradient acceleration
KW - Random projections
UR - http://www.scopus.com/inward/record.url?scp=85083312154&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083312154&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF44664.2019.9048683
DO - 10.1109/IEEECONF44664.2019.9048683
M3 - Conference contribution
AN - SCOPUS:85083312154
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 2171
EP - 2175
BT - Conference Record - 53rd Asilomar Conference on Signals, Systems and Computers, ACSSC 2019
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 53rd Asilomar Conference on Signals, Systems and Computers, ACSSC 2019
Y2 - 3 November 2019 through 6 November 2019
ER -