Model Poisoning Defense on Federated Learning: A Validation Based Approach
- Publisher: Springer International Publishing
- Publication Type: Conference Proceeding
- Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, 12570 LNCS, pp. 207-223
- Issue Date: 2020-01-01
Closed Access
Filename | Description | Size
---|---|---
Pages from 2020_Book_NetworkAndSystemSecurity-2.pdf | Published version | 1.78 MB
This item is closed access and not available.
© 2020, Springer Nature Switzerland AG. Federated learning is a distributed machine learning approach designed to preserve privacy. Clients collaboratively train a model on their on-device data, and the centralized server only aggregates the clients' training results instead of collecting their data. However, federated learning has a serious shortcoming: because the server cannot monitor clients' training processes, it cannot verify the validity of their training data or the correctness of their training results. Federated learning is therefore vulnerable to attacks in which an adversary maliciously manipulates training data or updates, such as model poisoning attacks. An attacker who executes a model poisoning attack can degrade the global model's performance on a targeted class by manipulating the labels of that class at one or more clients. Currently, there is a gap in defense methods against model poisoning attacks in federated learning. To address this shortcoming, we propose an effective defense method against model poisoning attacks in federated learning. We validate each client's local model on a validation set, and the server accepts updates only from well-performing clients, thereby protecting the global model against model poisoning attacks. We consider two cases, one where all clients have very similar training-data distributions and one where their distributions differ substantially, and design our method and experiments for both. The experimental results show that our defense significantly reduces the success rate of model poisoning attacks in both cases in a federated learning setting.
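The abstract describes the defense only at a high level: the server evaluates each client's local model on a validation set and aggregates only the updates that perform well. The sketch below illustrates that idea, assuming a simple linear model represented as a `(W, b)` weight pair and an accuracy threshold `ACC_THRESHOLD`; these names, the threshold value, and the unweighted averaging step are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of validation-based filtering before federated averaging.
# All names and the threshold are illustrative assumptions, not the authors' code.
import numpy as np

ACC_THRESHOLD = 0.80  # hypothetical minimum validation accuracy for acceptance


def evaluate(weights, X_val, y_val):
    """Validation accuracy of a linear model given as a (W, b) pair (assumption)."""
    W, b = weights
    preds = np.argmax(X_val @ W + b, axis=1)
    return float(np.mean(preds == y_val))


def federated_round(client_updates, X_val, y_val):
    """Aggregate only client models that pass server-side validation."""
    accepted = [(W, b) for W, b in client_updates
                if evaluate((W, b), X_val, y_val) >= ACC_THRESHOLD]
    if not accepted:
        return None  # no trustworthy update this round
    # Plain (unweighted) averaging over the accepted updates.
    W_avg = np.mean([W for W, _ in accepted], axis=0)
    b_avg = np.mean([b for _, b in accepted], axis=0)
    return W_avg, b_avg
```

A poisoned client whose relabeled data yields a model with poor accuracy on the server's validation set would simply be excluded from the round, which is the intuition behind the reported drop in attack success rate.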