Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks
- Publisher:
- Wiley
- Publication Type:
- Journal Article
- Citation:
- Concurrency and Computation: Practice and Experience, 2022, 34(7)
- Issue Date:
- 2022-03-22
Closed Access
Filename | Description | Size
---|---|---
Concurrency and Computation - 2020 - Zhao - Detecting and mitigating poisoning attacks in federated learning using.pdf | Published version | 5.65 MB
This item is closed access and not available.
In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in various application scenarios; therefore, collaborative learning is widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under threat from poisoning attacks, in which adversaries upload malicious model updates that contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during the training procedure and removes adversaries by auditing the accuracy of their model updates. Experiments on two well-known datasets, MNIST and Fashion-MNIST, suggest that federated learning is vulnerable to poisoning attacks and that the proposed defense method can detect and mitigate them.
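The paper itself is closed access, so the sketch below is only a minimal, hypothetical illustration of the auditing idea described in the abstract, not the authors' implementation. A stub (`generate_audit_data`) stands in for the GAN that would synthesize labeled auditing samples, clients are modeled as plain linear classifiers, and the relative dropout rule (discard updates whose audit accuracy falls well below the median) is our own assumption; all function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def generate_audit_data(n=256, dim=784, n_classes=10):
    """Stand-in for the paper's GAN.

    In the described scheme, a generative adversarial network trained during
    the federated procedure would synthesize labeled auditing samples without
    accessing participants' private data. Here we just return random data so
    the sketch is self-contained and runnable.
    """
    X = rng.normal(size=(n, dim)).astype(np.float32)
    y = rng.integers(0, n_classes, size=n)
    return X, y


def audit_accuracy(W, X, y):
    """Evaluate one client's update (a linear classifier) on the audit set."""
    preds = (X @ W).argmax(axis=1)
    return float((preds == y).mean())


def defended_aggregate(client_weights, margin=0.2):
    """FedAvg that first removes clients failing the accuracy audit.

    The relative threshold (median audit accuracy minus `margin`) is an
    assumption for illustration; the paper may use a different criterion.
    """
    X, y = generate_audit_data()
    scores = np.array([audit_accuracy(W, X, y) for W in client_weights])
    cutoff = np.median(scores) - margin
    kept = [W for W, s in zip(client_weights, scores) if s >= cutoff]
    # Average only the surviving (non-flagged) model updates.
    return np.mean(kept, axis=0), scores


if __name__ == "__main__":
    # Ten simulated client updates for a 784-input, 10-class linear model.
    clients = [rng.normal(size=(784, 10)) for _ in range(10)]
    global_W, scores = defended_aggregate(clients)
    print("audit accuracies:", np.round(scores, 3))
    print("aggregated update shape:", global_W.shape)
```

The median-based cutoff is chosen here because an absolute accuracy threshold would depend on how well the GAN's auditing data matches the true task; a relative rule only assumes that honest clients form the majority, which is the standard setting for poisoning defenses.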