Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks

Publisher:
WILEY
Publication Type:
Journal Article
Citation:
Concurrency and Computation: Practice and Experience, 2022, 34(7)
Issue Date:
2022-03-22
In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in a wide range of application scenarios; collaborative learning is therefore widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task together. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under the threat of poisoning attacks, in which adversaries upload malicious model updates to contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during the training procedure and removes adversaries by auditing the accuracy of their model updates. Experiments conducted on two well-known datasets, MNIST and Fashion-MNIST, show that federated learning is vulnerable to poisoning attacks and that the proposed defense method can detect and mitigate them.
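To make the auditing idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's implementation: the server scores each submitted model on auditing data (which the proposed defense would synthesize with a GAN; random toy data stands in for it here) and excludes updates whose audit accuracy falls below a threshold before averaging. All identifiers, the linear toy model, and the threshold value are assumptions made for illustration.

```python
# Hypothetical sketch of accuracy-based auditing in one federated round.
# Names such as audit_accuracy and AUDIT_THRESHOLD are illustrative, not from the paper.
import numpy as np

AUDIT_THRESHOLD = 0.5   # assumed cutoff; the paper's actual criterion may differ
NUM_CLIENTS = 5
DIM = 20                # toy feature dimension

def audit_accuracy(weights, audit_x, audit_y):
    """Evaluate a client's linear model on auditing data.
    In the proposed defense, audit_x would be generated by a GAN trained
    alongside the global model; here it is random toy data."""
    logits = audit_x @ weights
    preds = (logits > 0).astype(int)
    return float(np.mean(preds == audit_y))

def federated_round(client_updates, audit_x, audit_y):
    """Aggregate only the updates that pass the accuracy audit (FedAvg-style mean)."""
    accepted = []
    for cid, w in client_updates.items():
        acc = audit_accuracy(w, audit_x, audit_y)
        if acc >= AUDIT_THRESHOLD:
            accepted.append(w)
        else:
            print(f"client {cid} flagged as poisoned (audit accuracy {acc:.2f})")
    if not accepted:
        raise RuntimeError("all updates rejected; keep the previous global model")
    return np.mean(accepted, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=DIM)
    # Stand-in for GAN-generated auditing data.
    audit_x = rng.normal(size=(200, DIM))
    audit_y = (audit_x @ true_w > 0).astype(int)
    # Honest clients send noisy copies of the true weights; one attacker flips the sign.
    updates = {i: true_w + 0.05 * rng.normal(size=DIM) for i in range(NUM_CLIENTS - 1)}
    updates[NUM_CLIENTS - 1] = -true_w
    global_w = federated_round(updates, audit_x, audit_y)
    print("global model audit accuracy:", audit_accuracy(global_w, audit_x, audit_y))
```

In this toy run the sign-flipped update scores near-zero audit accuracy and is excluded, while the honest updates are averaged; the actual defense differs in model architecture and in how the auditing data are generated.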