A Novel Framework for the Analysis and Design of Heterogeneous Federated Learning

Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor

Research output: Contribution to journal › Article › peer-review

Abstract

In federated learning, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency: the global model converges to a stationary point of a mismatched objective function that can be arbitrarily different from the true objective. This paper provides a general framework for analyzing the convergence of federated optimization algorithms with heterogeneous local training progress at the clients. The analysis covers both smooth non-convex and strongly convex settings, and also extends to the partial client participation case. The framework subsumes previously proposed methods such as FedAvg and FedProx, and provides the first principled understanding of the solution bias and the convergence slowdown caused by objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
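To make the aggregation issue concrete, the following is a minimal NumPy sketch of the idea behind normalized averaging, contrasted with naive weighted averaging. The function names and the scalar-weight setup are illustrative assumptions, not the paper's implementation; the normalization (dividing each client's cumulative update by its local step count, then rescaling by an effective step count) follows the high-level description in the abstract.

```python
import numpy as np

def naive_average(global_w, local_ws, weights):
    # FedAvg-style aggregation: weighted average of local models,
    # regardless of how many local steps each client actually ran.
    # Clients that ran more steps implicitly dominate the update.
    return sum(p * w for p, w in zip(weights, local_ws))

def fednova_average(global_w, local_ws, weights, local_steps):
    # FedNova-style normalized averaging (sketch): divide each
    # client's cumulative update by its local step count tau_i, so
    # every client contributes a per-step update direction, then
    # rescale by the effective number of steps tau_eff.
    d = [(global_w - w) / tau for w, tau in zip(local_ws, local_steps)]
    tau_eff = sum(p * tau for p, tau in zip(weights, local_steps))
    return global_w - tau_eff * sum(p * di for p, di in zip(weights, d))
```

When all clients perform the same number of local steps, the normalized update reduces exactly to the naive weighted average; the two aggregation rules diverge only under heterogeneous local progress, which is precisely when naive averaging drifts toward a mismatched objective.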

Original language: English (US)
Pages (from-to): 5234-5249
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Volume: 69
DOIs
State: Published - 2021

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering

Keywords

  • Distributed optimization
  • Federated learning

