Motivated by the increasingly powerful computing capabilities of end-user devices and by growing privacy concerns over sharing sensitive raw data, a distributed machine learning paradigm known as federated learning (FL) has emerged. By training models locally at each client and aggregating the resulting models at a central server, FL avoids sharing raw data directly and thereby reduces privacy leakage. However, the conventional FL framework relies heavily on a single central server and may fail if that server behaves maliciously. To address this single point of failure, this work investigates a blockchain-assisted decentralized FL framework that prevents malicious clients from poisoning the learning process and thus provides a self-motivated and reliable learning environment. In this framework, model aggregation is fully decentralized, and the tasks of training for FL and mining for the blockchain are integrated into each participant. Privacy and resource-allocation issues are further investigated, and a critical issue unique to the proposed framework is disclosed: a lazy client can simply duplicate models shared by other clients to reap benefits without contributing its own resources to FL. To address these issues, analytical and experimental results are provided that point to possible solutions, namely adding noise to achieve local differential privacy and using pseudo-noise (PN) sequences as watermarks to detect lazy clients.
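The two countermeasures named above can be illustrated with a minimal sketch. The function names, the Gaussian noise scale, the watermark strength `alpha`, and the detection threshold below are all illustrative assumptions, not the paper's actual parameters: in the paper's setting the noise would be calibrated to a local differential privacy budget, and the PN watermark/detector would be designed for the concrete model dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_ldp_noise(update, sigma=0.1):
    # Perturb a local model update with Gaussian noise before sharing it.
    # (Illustrative sigma; a real deployment would calibrate sigma to a
    # chosen local differential privacy budget.)
    return update + rng.normal(0.0, sigma, size=update.shape)

def embed_watermark(update, pn_seq, alpha=0.5):
    # Additively embed a client-specific pseudo-noise (PN) sequence.
    # alpha trades off watermark detectability against model distortion
    # (value here is a hypothetical choice for the demo).
    return update + alpha * pn_seq

def looks_duplicated(update, pn_seq, alpha=0.5, threshold=None):
    # A lazy client that copies another client's shared model also copies
    # that client's embedded watermark, so the copy correlates strongly
    # with the original owner's PN sequence; an independent update does not.
    corr = np.dot(update, pn_seq) / len(pn_seq)
    if threshold is None:
        threshold = alpha / 2.0  # midpoint between ~alpha and ~0
    return corr > threshold

# Demo on a synthetic "model update" vector.
d = 1000
honest_update = rng.normal(size=d)
pn = rng.choice([-1.0, 1.0], size=d)        # +/-1 PN watermark sequence
shared = embed_watermark(add_ldp_noise(honest_update), pn)

print(looks_duplicated(shared, pn))         # copied/watermarked model
print(looks_duplicated(honest_update, pn))  # independent model
```

The detector exploits a standard spread-spectrum property: the per-coordinate correlation of a watermarked vector with its PN sequence concentrates near `alpha`, while the correlation of an independent update concentrates near zero, so a simple threshold separates the two.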