Abstract
To facilitate the deployment of machine learning in resource- and privacy-constrained systems such as the Internet of Things, federated learning (FL) has been proposed as a means of enabling edge devices to train a shared learning model while preserving privacy. However, Google's seminal FL algorithm requires all devices to be directly connected to a central controller, which limits its applications. In contrast, this article introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller. The fundamentals of this framework are developed, and a number of communication techniques are proposed to improve CFL performance. An overview of centralized learning, Google's FL, and CFL is presented. For each type of learning, the basic architecture as well as its advantages, drawbacks, and operating conditions are introduced. Then, four CFL performance metrics are presented, and a suite of communication techniques, ranging from network formation, device scheduling, and mobility management to coding, is introduced to optimize the performance of CFL. For each technique, future research opportunities are discussed. In a nutshell, this article showcases how CFL can be effectively implemented at the edge of large-scale wireless systems.
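To make the architectural contrast concrete, below is a minimal, illustrative Python sketch of the two topologies the abstract describes: a FedAvg-style round in which every device reports to a central controller, versus a collaborative round in which each device averages only with its neighbors over a connectivity graph. The ring graph, toy linear-regression task, learning rate, and round count are assumptions for illustration only and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task split across 4 edge devices (illustrative only).
num_devices, dim = 4, 5
true_w = rng.normal(size=dim)
data = []
for _ in range(num_devices):
    X = rng.normal(size=(20, dim))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    data.append((X, y))

# Connectivity graph for CFL: each device talks only to its neighbors
# (a ring here, as an assumption), not to a central controller.
neighbors = {i: [(i - 1) % num_devices, (i + 1) % num_devices]
             for i in range(num_devices)}

def local_step(w, X, y, lr=0.01):
    """One local gradient-descent step on a device's own data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Collaborative FL round: local update, then average with neighbors only.
w_cfl = [np.zeros(dim) for _ in range(num_devices)]
for _ in range(200):
    w_cfl = [local_step(w, *d) for w, d in zip(w_cfl, data)]
    w_cfl = [np.mean([w_cfl[i]] + [w_cfl[j] for j in neighbors[i]], axis=0)
             for i in range(num_devices)]

# Baseline FedAvg-style round: every device sends its model to a central
# controller, which broadcasts the global average back to all devices.
w_fed = [np.zeros(dim) for _ in range(num_devices)]
for _ in range(200):
    w_fed = [local_step(w, *d) for w, d in zip(w_fed, data)]
    w_global = np.mean(w_fed, axis=0)
    w_fed = [w_global.copy() for _ in range(num_devices)]

print("CFL error:   ", np.linalg.norm(np.mean(w_cfl, axis=0) - true_w))
print("FedAvg error:", np.linalg.norm(np.mean(w_fed, axis=0) - true_w))
```

The only structural difference between the two loops is the aggregation step: neighbor-wise averaging removes the need for every device to maintain a direct link to a central controller, which is the property the article's CFL framework builds on.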
Original language | English (US)
---|---
Article number | 9311931
Pages (from-to) | 48-54
Number of pages | 7
Journal | IEEE Communications Magazine
Volume | 58
Issue number | 12
DOIs | 
State | Published - Dec 2020
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Computer Networks and Communications
- Electrical and Electronic Engineering