Defending Poisoning Attacks in Federated Learning via Adversarial Training Method

Publisher:
Springer
Publication Type:
Conference Proceeding
Citation:
Frontiers in Cyber Security, 2020, 1286, pp. 83-94
Issue Date:
2020-01-01
Recently, federated learning has shown significant advantages in protecting training data privacy by maintaining a joint model across multiple clients. However, its model security has only recently been explored, and studies have shown that federated learning is inherently vulnerable to active attacks launched by malicious participants. Poisoning is one of the most powerful active attacks, in which an inside attacker uploads crafted local model updates to degrade the performance of the global model. In this paper, we first illustrate how the poisoning attack works in the context of federated learning. We then propose a defense method that relies on a well-studied adversarial training technique, pivotal training, which improves the robustness of the global model against poisoned local updates. The main contribution of this work is that the countermeasure is simple and scalable: it requires no complex accuracy validation and only changes the optimization objectives and loss functions. Finally, we demonstrate the effectiveness of the proposed mitigation mechanism through extensive experiments.
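To make the "changing the optimization objectives and loss functions" idea concrete, below is a minimal sketch of pivotal (adversarial) training in the style the abstract refers to: a task model is trained jointly with an adversary network under a minimax objective. The network sizes, the nuisance labels `z`, the trade-off weight `lam`, and the use of PyTorch are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a pivotal-training-style local update (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

predictor = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # task model f
adversary = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))   # adversary r

opt_f = torch.optim.SGD(predictor.parameters(), lr=0.01)
opt_r = torch.optim.SGD(adversary.parameters(), lr=0.01)
lam = 1.0  # trade-off between task loss and the adversarial term (assumed value)

def local_training_step(x, y, z):
    """One local step on a client: x = features, y = task labels,
    z = nuisance labels the adversary tries to recover from f's output."""
    # 1) Train the adversary to predict z from the predictor's (detached) logits.
    logits = predictor(x).detach()
    adv_loss = F.cross_entropy(adversary(logits), z)
    opt_r.zero_grad()
    adv_loss.backward()
    opt_r.step()

    # 2) Train the predictor on the combined objective
    #    L_f = L_task - lam * L_adv, i.e. stay accurate on the task while
    #    making the adversary's prediction hard (the pivotal-training minimax).
    logits = predictor(x)
    task_loss = F.cross_entropy(logits, y)
    adv_loss = F.cross_entropy(adversary(logits), z)
    loss = task_loss - lam * adv_loss
    opt_f.zero_grad()
    loss.backward()
    opt_f.step()
    return task_loss.item(), adv_loss.item()
```

In a federated setting, a step like this would replace the plain local SGD update on each client before the server aggregates the model parameters; the exact loss terms and how the nuisance variable is defined for poisoned updates are specific to the paper and not reproduced here.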