TY - GEN
T1 - Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds
AU - Zehtabi, Shahryar
AU - Hosseinalipour, Seyyedali
AU - Brinton, Christopher G.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
AB - A recent emphasis of distributed learning research has been on federated learning (FL), in which model training is conducted by the data-collecting devices. Existing research on FL has mostly focused on a star topology learning architecture with synchronized (time-triggered) model training rounds, where the local models of the devices are periodically aggregated by a centralized coordinating node. However, in many settings, such a coordinating node may not exist, motivating efforts to fully decentralize FL. In this work, we propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over the network graph topology. We consider heterogeneous communication event thresholds at each device that weigh the change in local model parameters against the available local resources in deciding the benefit of aggregations at each iteration. Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in the distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology. Subsequent numerical results show that our methodology obtains substantial reductions in communication requirements compared with FL baselines.
UR - http://www.scopus.com/inward/record.url?scp=85143715369&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143715369&partnerID=8YFLogxK
DO - 10.1109/CDC51059.2022.9993258
M3 - Conference contribution
AN - SCOPUS:85143715369
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 4680
EP - 4687
BT - 2022 IEEE 61st Conference on Decision and Control, CDC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 61st IEEE Conference on Decision and Control, CDC 2022
Y2 - 6 December 2022 through 9 December 2022
ER -