TY - CONF
T1 - Distributed Convex Optimization with Limited Communications
AU - Rao, Milind
AU - Rini, Stefano
AU - Goldsmith, Andrea
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
AB - In this paper, a distributed convex optimization algorithm, termed the distributed coordinate dual averaging (DCDA) algorithm, is proposed. DCDA addresses the scenario of a large distributed optimization problem with limited communication among the nodes of the network. Known distributed subgradient methods, such as distributed dual averaging or the distributed alternating direction method of multipliers, assume that nodes can exchange messages of large cardinality; this assumption on the network communication capabilities does not hold in many scenarios of practical relevance. To address this setting, DCDA restricts the communication between nodes in each round to a fixed number of dimensions. We bound the rate of convergence of the algorithm under different communication protocols and network architectures. We also consider extensions to the cases in which gradient knowledge is imperfect and in which transmitted messages are corrupted by additive noise or quantized. Numerical simulations demonstrating the performance of DCDA in these settings are also provided.
KW - Distributed optimization
KW - convex analysis
KW - subgradient descent methods
KW - wireless communications
UR - http://www.scopus.com/inward/record.url?scp=85068993958&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068993958&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2019.8682453
DO - 10.1109/ICASSP.2019.8682453
M3 - Conference contribution
AN - SCOPUS:85068993958
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 4604
EP - 4608
BT - 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Y2 - 12 May 2019 through 17 May 2019
ER -
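The abstract above describes limiting each communication round to a fixed number of message dimensions. As a purely illustrative sketch of that idea, the Python snippet below runs generic distributed dual averaging while mixing only k coordinates of the dual variable per round; the function name, the random coordinate schedule, the 1/sqrt(t) step size, and the quadratic prox term are all assumptions for illustration, not the paper's exact DCDA updates or guarantees.

import numpy as np

def dcda_sketch(subgrads, P, d, k=2, T=500):
    """Dual averaging with coordinate-limited gossip (illustrative only).

    subgrads[i](x) returns a subgradient of node i's local convex f_i at x,
    P is a doubly stochastic mixing matrix matching the network topology,
    and k is the fixed number of coordinates exchanged per round.
    """
    n = P.shape[0]
    z = np.zeros((n, d))        # dual variables, one row per node
    x = np.zeros((n, d))        # primal iterates
    x_bar = np.zeros((n, d))    # running primal averages (the returned output)
    rng = np.random.default_rng(0)
    for t in range(1, T + 1):
        g = np.stack([subgrads[i](x[i]) for i in range(n)])
        S = rng.choice(d, size=k, replace=False)  # coordinates exchanged this round
        mixed = P @ z                             # consensus step on the dual variables
        z[:, S] = mixed[:, S]                     # neighbor information only on k coordinates
        z += g                                    # accumulate local subgradients
        x = -z / np.sqrt(t)                       # prox step for psi(x) = ||x||^2 / 2
        x_bar += (x - x_bar) / t                  # running average, the usual dual-averaging output
    return x_bar

# Toy usage: three nodes jointly minimize sum_i ||x - a_i||^2; the optimum is mean(a_i).
a = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
subgrads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
P = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
print(dcda_sketch(subgrads, P, d=2, k=1).mean(axis=0))  # close to [0, 0]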