Abstract
Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be executed on distributed UEs. In this paradigm, parts of an ML process are outsourced to multiple distributed UEs, and the processed information is then aggregated at a certain level by a central server, turning a centralized ML process into a distributed one and bringing significant benefits. However, this new distributed ML paradigm also raises new privacy and security risks. In this article, we survey the emerging security and privacy risks of distributed ML from the unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e., we consider the following levels: 1) the level of preprocessed data; 2) the level of learning models; 3) the level of extracted knowledge; and 4) the level of intermediate results. We analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms and then discuss possible defense methods against such threats. Finally, we complete the survey with an outlook on the challenges and possible directions for future research in this critical area.
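To make the paradigm the abstract describes concrete, here is a minimal sketch of one distributed-ML round in which each UE trains locally on private data and only model parameters (level-2 information in the article's taxonomy) are sent to a central server for FedAvg-style averaging. This is an illustrative toy example, not code from the article; the task, data, and names such as `local_update` are hypothetical.

```python
# Sketch of the distributed-ML paradigm: UEs keep raw data local and
# exchange only model parameters, which the server aggregates.
# All names and the toy linear-regression task are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One UE: a few steps of local gradient descent on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the model parameters leave the device

# Three UEs, each holding private data that is never shared.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

w_global = np.zeros(3)
for _ in range(10):
    # Server broadcasts the global model; each UE trains locally.
    local_models = [local_update(w_global, X, y) for X, y in clients]
    # Server aggregates: plain average of the returned models.
    w_global = np.mean(local_models, axis=0)

print("global model after 10 rounds:", w_global)
```

The exchanged parameters are exactly the attack surface the survey studies: an adversary observing or tampering with them can mount the model-level attacks the article catalogs.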
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1097-1132 |
| Number of pages | 36 |
| Journal | Proceedings of the IEEE |
| Volume | 111 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 1 2023 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- General Computer Science
- Electrical and Electronic Engineering
Keywords
- Distributed machine learning (ML)
- federated learning (FL)
- multiagent systems
- privacy
- security
- trusted artificial intelligence (AI)