Abstract
Federated learning has emerged as a popular technique for distributing model training across the network edge. Its learning architecture is conventionally a star topology between the devices and a central server. In this paper, we propose two timescale hybrid federated learning (TT-HF), which migrates to a more distributed topology via device-to-device (D2D) communications. In TT-HF, local model training occurs at devices via successive gradient iterations, and the synchronization process occurs at two timescales: (i) macro-scale, where global aggregations are carried out via device-server interactions, and (ii) micro-scale, where local aggregations are carried out via D2D cooperative consensus formation in different device clusters. Our theoretical analysis reveals how device-, cluster-, and network-level parameters affect the convergence of TT-HF, and leads to a set of conditions under which a convergence rate of O(1/t) is guaranteed. Experimental results demonstrate the improvements in convergence and utilization that can be obtained by TT-HF over state-of-the-art federated learning baselines.
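The two-timescale structure described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, self-contained illustration and not the authors' implementation: it assumes a simple least-squares objective, synthetic per-device data, a fixed cluster assignment, exact averaging over a complete intra-cluster graph for the D2D consensus rounds, and uniform global averaging at the server. Names such as `local_steps`, `consensus_period`, and `global_period` are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup (assumed for illustration): each device holds data for a
# linear regression task with a shared ground-truth parameter vector.
dim, samples_per_device = 10, 50
w_true = rng.normal(size=dim)
clusters = [[0, 1, 2], [3, 4, 5]]            # fixed device clusters (assumption)
num_devices = sum(len(c) for c in clusters)

X = [rng.normal(size=(samples_per_device, dim)) for _ in range(num_devices)]
y = [X[i] @ w_true + 0.1 * rng.normal(size=samples_per_device)
     for i in range(num_devices)]

def local_gradient(w, i):
    """Least-squares gradient at device i."""
    return X[i].T @ (X[i] @ w - y[i]) / samples_per_device

# TT-HF-style loop (sketch): successive local gradient steps at each device,
# periodic D2D consensus within clusters (micro-scale), and periodic
# server-side aggregation (macro-scale). Hyperparameters are illustrative.
step_size, local_steps = 0.05, 200
consensus_period, consensus_rounds = 5, 3    # micro-scale schedule
global_period = 25                           # macro-scale schedule

w = [np.zeros(dim) for _ in range(num_devices)]  # local models, server init

for t in range(1, local_steps + 1):
    # Local model training: one gradient iteration at every device.
    for i in range(num_devices):
        w[i] = w[i] - step_size * local_gradient(w[i], i)

    if t % consensus_period == 0:
        # Micro-scale: D2D cooperative consensus inside each cluster.
        # Each round simply averages over the whole cluster here
        # (a simplifying assumption of a complete D2D graph).
        for cluster in clusters:
            for _ in range(consensus_rounds):
                avg = np.mean([w[i] for i in cluster], axis=0)
                for i in cluster:
                    w[i] = avg

    if t % global_period == 0:
        # Macro-scale: global aggregation at the server, then broadcast.
        w_global = np.mean(w, axis=0)
        w = [w_global.copy() for _ in range(num_devices)]

print("distance to ground truth:", np.linalg.norm(np.mean(w, axis=0) - w_true))
```

In the actual TT-HF scheme, the micro-scale step is a consensus process over the (possibly sparse) D2D communication graph of each cluster, i.e., repeated weighted averaging with neighbors rather than the exact cluster-wide average used above; the full averaging here is only a convenient stand-in for that consensus formation.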
| Original language | English (US) |
|---|---|
| Journal | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| DOIs | |
| State | Published - 2021 |
| Externally published | Yes |
| Event | 2021 IEEE Global Communications Conference, GLOBECOM 2021 - Madrid, Spain. Duration: Dec 7 2021 → Dec 11 2021 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Networks and Communications
- Hardware and Architecture
- Signal Processing