Abstract
In multi-cell wireless networks, providing intelligent services via federated learning (FL) becomes more challenging due to multi-level distribution shifts across clients and regions, as well as additional communication delays between edge and cloud servers. To address these issues, we propose SplitOMC, a split learning framework that integrates overlapping-area clients and a multi-exit neural architecture to jointly handle (i) client-preferred, (ii) out-of-preference, and (iii) out-of-region tasks. By strategically leveraging clients in overlapping regions, SplitOMC accelerates training without excessive backhaul communication while maintaining both personalization and generalization. We theoretically analyze the convergence behavior of the proposed algorithm, establishing performance stability under heterogeneous data and communication conditions. Extensive experiments on MNIST, CIFAR-10/100, and a real-world Jetson Nano testbed demonstrate that SplitOMC consistently achieves faster training and inference with improved accuracy compared to state-of-the-art methods. In particular, the framework remains robust in resource-constrained and unstable network environments, highlighting its practical value for next-generation wireless intelligent services.
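The multi-exit idea referenced in the abstract can be illustrated with a minimal sketch: a shared backbone with a classifier head ("exit") attached after each layer, where inference may stop at the first exit that is confident enough. This is a hedged toy illustration only; the class name `MultiExitMLP`, the layer sizes, the exit placement, and the confidence-based early-exit rule are assumptions for exposition, not the paper's actual SplitOMC implementation.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class MultiExitMLP:
    """Toy multi-exit network (illustrative only, not from the paper):
    a shared backbone with one classifier head ("exit") per hidden
    layer. Earlier exits yield cheap predictions; later exits see a
    deeper representation."""

    def __init__(self, in_dim=32, hidden=(64, 64, 64), n_classes=10, seed=0):
        rng = np.random.default_rng(seed)
        dims = (in_dim,) + hidden
        # One weight matrix per backbone layer, one per exit head.
        self.backbone = [rng.standard_normal((dims[i], dims[i + 1])) * 0.1
                         for i in range(len(hidden))]
        self.exits = [rng.standard_normal((h, n_classes)) * 0.1 for h in hidden]

    def forward(self, x, confidence_threshold=None):
        """Return logits from each exit reached. If a threshold is
        given, stop at the first exit whose max softmax probability
        meets it (early-exit inference)."""
        outputs = []
        h = x
        for w_b, w_e in zip(self.backbone, self.exits):
            h = relu(h @ w_b)
            logits = h @ w_e
            outputs.append(logits)
            if confidence_threshold is not None:
                probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
                probs /= probs.sum(axis=-1, keepdims=True)
                if probs.max() >= confidence_threshold:
                    break  # confident enough: skip deeper layers
        return outputs


model = MultiExitMLP()
x = np.ones((1, 32))
all_exits = model.forward(x)  # logits from every exit (no early stop)
```

In a split-learning setting, the cut between client-side and server-side layers could fall between any two backbone blocks, so early exits let constrained clients answer locally while harder inputs continue to deeper, server-side exits.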
| Original language | English (US) |
|---|---|
| Journal | IEEE Transactions on Networking |
| DOIs | |
| State | Accepted/In press - 2026 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Computer Science Applications
- Software
- Electrical and Electronic Engineering
Keywords
- Distribution shift
- Federated learning
- Multi-cell networks
- Personalization
- Split learning