Federated Learning with Communication Delay in Edge Networks

Frank Po Chen Lin, Christopher G. Brinton, Nicolò Michelusi

Research output: Contribution to journal › Conference article › peer-review

17 Scopus citations

Abstract

Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks. This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator. A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm by incorporating a weighting between the current local model and the delayed global model received at each device during the synchronization step. Through theoretical analysis, an upper bound is derived on the global model loss achieved by FedDelAvg, revealing a strong dependence of learning performance on the values of the weighting and the learning rate. Experimental results on a popular ML task show significant improvements in convergence speed when the weighting scheme is optimized to account for delays.
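
To make the synchronization step concrete, below is a minimal Python sketch of a FedDelAvg-style update. It is an illustration under stated assumptions, not the paper's implementation: the quadratic per-device losses, the fixed staleness of DELAY rounds, and the parameter names gamma, lr, and local_steps are all hypothetical choices made here for readability. Setting gamma = 1 recovers plain federated averaging with a stale global model, while smaller gamma preserves more of each device's fresh local progress.

    import numpy as np

    # Toy setup: each device minimizes 0.5 * ||w - target_i||^2 (an
    # assumption for illustration; the paper's losses are more general).
    rng = np.random.default_rng(0)
    DIM, NUM_DEVICES, DELAY = 5, 4, 2          # DELAY = rounds of staleness
    targets = [rng.normal(size=DIM) for _ in range(NUM_DEVICES)]

    def grad(w, target):
        # Gradient of the toy quadratic loss at w.
        return w - target

    def feddelavg_sync(w_local, w_global_delayed, gamma):
        # Weighted synchronization: blend the device's current local model
        # with the delayed global model it has just received.
        return gamma * w_global_delayed + (1.0 - gamma) * w_local

    lr, gamma, local_steps = 0.1, 0.5, 3       # hypothetical values
    w_devices = [np.zeros(DIM) for _ in range(NUM_DEVICES)]
    global_history = [np.zeros(DIM)]           # one global model per round

    for rnd in range(20):
        # Each device takes a few local SGD steps between synchronizations.
        for i in range(NUM_DEVICES):
            for _ in range(local_steps):
                w_devices[i] = w_devices[i] - lr * grad(w_devices[i], targets[i])
        # The aggregator averages the local models it has received.
        global_history.append(np.mean(w_devices, axis=0))
        # Due to communication delay, devices see a stale global model.
        stale = global_history[max(0, len(global_history) - 1 - DELAY)]
        w_devices = [feddelavg_sync(w, stale, gamma) for w in w_devices]

    print("final global model:", global_history[-1])

The paper's analysis concerns exactly how gamma and lr should be chosen as functions of the delay; the fixed values above are placeholders.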

Original language: English (US)
Article number: 9322592
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
DOIs
State: Published - 2020
Externally published: Yes
Event: 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Virtual, Taipei, Taiwan, Province of China
Duration: Dec 7, 2020 - Dec 11, 2020

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Signal Processing

Keywords

  • Federated learning
  • convergence analysis
  • distributed machine learning
  • edge intelligence
  • edge-cloud computing
