TY - GEN
T1 - Differentially-Private Multi-Tier Federated Learning
AU - Chen, Evan
AU - Lin, Frank Po Chen
AU - Han, Dong Jun
AU - Brinton, Christopher G.
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - While federated learning (FL) eliminates the transmission of raw data over a network, it is still vulnerable to privacy breaches from the communicated model parameters. In this work, we propose Multi-Tier Federated Learning with Multi-Tier Differential Privacy (M2FDP), a DP-enhanced FL methodology for jointly optimizing privacy and performance in hierarchical networks. One of the key concepts of M2FDP is to adapt DP noise injection according to different tiers of an established edge/fog hierarchy (e.g., edge devices, intermediate nodes, and other layers up to cloud servers) according to the trust models within particular subnetworks. We conduct a comprehensive analysis of the convergence behavior of M2FDP, revealing conditions on parameter tuning under which the training process converges sublinearly to a finite stationarity gap that depends on the network hierarchy, trust model, and target privacy level. Subsequent numerical evaluations demonstrate that M2FDP obtains substantial improvements in these metrics over baselines for different privacy budgets, and validate the impact of different system configurations.
AB - While federated learning (FL) eliminates the transmission of raw data over a network, it is still vulnerable to privacy breaches from the communicated model parameters. In this work, we propose Multi-Tier Federated Learning with Multi-Tier Differential Privacy (M2FDP), a DP-enhanced FL methodology for jointly optimizing privacy and performance in hierarchical networks. One of the key concepts of M2FDP is to adapt DP noise injection according to different tiers of an established edge/fog hierarchy (e.g., edge devices, intermediate nodes, and other layers up to cloud servers) according to the trust models within particular subnetworks. We conduct a comprehensive analysis of the convergence behavior of M2FDP, revealing conditions on parameter tuning under which the training process converges sublinearly to a finite stationarity gap that depends on the network hierarchy, trust model, and target privacy level. Subsequent numerical evaluations demonstrate that M2FDP obtains substantial improvements in these metrics over baselines for different privacy budgets, and validate the impact of different system configurations.
UR - https://www.scopus.com/pages/publications/105018457122
UR - https://www.scopus.com/inward/citedby.url?scp=105018457122&partnerID=8YFLogxK
U2 - 10.1109/ICC52391.2025.11161547
DO - 10.1109/ICC52391.2025.11161547
M3 - Conference contribution
AN - SCOPUS:105018457122
T3 - IEEE International Conference on Communications
SP - 5633
EP - 5639
BT - ICC 2025 - IEEE International Conference on Communications
A2 - Valenti, Matthew
A2 - Reed, David
A2 - Torres, Melissa
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE International Conference on Communications, ICC 2025
Y2 - 8 June 2025 through 12 June 2025
ER -