Abstract
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles. Owing to communication costs and security requirements, it is of paramount importance to analyze this information in a decentralized manner rather than aggregating the data at a fusion center. To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. We consider the problem of learning model parameters in a multiagent system in which data are processed locally by distributed edge nodes. A class of minibatch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model. To address two critical challenges in distributed learning systems, namely, the communication bottleneck and straggler nodes (nodes with slow responses), an error-control-coding-based stochastic incremental ADMM is investigated. Given an appropriate minibatch size, we show that the minibatch stochastic ADMM-based method converges at a rate of $O(1/\sqrt{k})$, where $k$ denotes the number of iterations. Numerical experiments reveal that the proposed algorithm is communication efficient, fast to respond, and robust in the presence of straggler nodes compared with state-of-the-art algorithms.
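To make the abstract's setting concrete, the sketch below illustrates a generic minibatch stochastic incremental ADMM for consensus least squares over a ring of edge nodes. It is a minimal illustration, not the paper's coded algorithm: the least-squares losses, the ring visiting order, the penalty `rho`, the step size `eta`, and the minibatch size are all assumptions made here for demonstration.

```python
import numpy as np

# Hypothetical setup: n edge nodes, each holding local data (A_i, b_i)
# for a shared least-squares model; nodes are visited in a ring order.
rng = np.random.default_rng(0)
n_nodes, n_samples, dim = 5, 200, 10
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_nodes)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_samples) for Ai in A]

def minibatch_grad(i, x, batch_size=20):
    """Stochastic gradient of node i's local loss from a sampled minibatch."""
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / batch_size

# Per-node primal x_i and dual y_i, global consensus variable z;
# rho is the augmented-Lagrangian penalty, eta the primal step size.
rho, eta, iters = 1.0, 0.05, 500
z = np.zeros(dim)
x = [np.zeros(dim) for _ in range(n_nodes)]
y = [np.zeros(dim) for _ in range(n_nodes)]

for k in range(iters):
    i = k % n_nodes                      # incremental: one node active per step
    g = minibatch_grad(i, x[i])          # minibatch stochastic gradient
    # Linearized primal update: gradient step on the augmented Lagrangian
    x[i] = x[i] - eta * (g + y[i] + rho * (x[i] - z))
    # Dual ascent on the consensus constraint x_i = z
    y[i] = y[i] + rho * (x[i] - z)
    # Global variable averages primal plus scaled dual information
    z = np.mean([x[j] + y[j] / rho for j in range(n_nodes)], axis=0)

print("distance to ground truth:", np.linalg.norm(z - x_true))
```

Under this sketch, each iteration activates a single node, which keeps per-step communication low; the paper's contribution additionally applies error-control coding so that slow (straggler) nodes do not stall the update round.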
Original language | English (US)
---|---
Article number | 9351538
Pages (from-to) | 5360-5373
Number of pages | 14
Journal | IEEE Internet of Things Journal
Volume | 8
Issue number | 7
DOIs |
State | Published - Apr 1 2021
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
Keywords
- Alternating direction method of multipliers (ADMM)
- coded edge computing
- consensus optimization
- decentralized learning