LoDen: Making Every Client in Federated Learning a Defender Against Poisoning Membership Inference Attacks
- Publisher: Association for Computing Machinery (ACM)
- Publication Type: Conference Proceeding
- Citation: Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIA CCS '23), 2023, pp. 122-135
- Issue Date: 2023-07-10
Closed Access
| Filename | Description | Size |
|---|---|---|
| 23' AsiaCCS LoDen.pdf | Accepted version | 1.24 MB |
This item is closed access and not available.
Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In poisoning MIA, adversaries maliciously manipulate the local updates on selected samples and share the resulting gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their samples locally, an adversary can infer whether a targeted sample belongs to a victim's training set by observing how that sample's prediction changes. This type of attack amplifies the threat of traditional passive MIA, yet defense mechanisms against it remain largely unexplored. In this work, we first investigate the effectiveness of existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient, because poisoning MIA targets specific victim samples and, unlike general poisoning, has minimal impact on overall model performance. Thus, we propose a new client-side defense mechanism, called LoDen, which leverages each client's unique ability to detect suspicious privacy attacks on its own data. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage in LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
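The abstract only outlines the client-side detection idea at a high level, so the sketch below illustrates, in plain Python with NumPy, one way a client could watch the received global model's behaviour on its own training samples across rounds and flag abrupt shifts. It is a minimal illustration under assumed thresholds and interfaces, not the LoDen implementation (which is in the linked repository); the names `MembershipLeakageMonitor`, `check_round`, and `loss_jump_threshold` are hypothetical.

```python
# Illustrative sketch only: a client-side monitor that tracks how the global
# model's predictions on the client's OWN training samples change across FL
# rounds, and flags samples whose behaviour shifts abnormally. This is NOT
# the LoDen implementation; names, thresholds, and logic are hypothetical.
import numpy as np


class MembershipLeakageMonitor:
    """Tracks per-sample predictions of the received global model across
    FL rounds and flags samples whose behaviour changes abruptly."""

    def __init__(self, loss_jump_threshold: float = 1.0):
        # Hypothetical threshold on the allowed per-sample loss increase
        # between two consecutive rounds.
        self.loss_jump_threshold = loss_jump_threshold
        self.prev_losses = None   # per-sample losses from the previous round
        self.prev_labels = None   # predicted labels from the previous round

    def check_round(self, probs: np.ndarray, true_labels: np.ndarray) -> np.ndarray:
        """probs: (n_samples, n_classes) softmax outputs of the newly received
        global model on this client's local training samples.
        Returns indices of samples whose predictions shifted suspiciously."""
        losses = -np.log(probs[np.arange(len(true_labels)), true_labels] + 1e-12)
        pred_labels = probs.argmax(axis=1)

        suspicious = np.array([], dtype=int)
        if self.prev_losses is not None:
            loss_jump = losses - self.prev_losses
            label_flip = pred_labels != self.prev_labels
            # Flag samples whose loss suddenly rises, or whose previously
            # correct prediction flips to another class.
            suspicious = np.where(
                (loss_jump > self.loss_jump_threshold)
                | (label_flip & (self.prev_labels == true_labels))
            )[0]

        self.prev_losses = losses
        self.prev_labels = pred_labels
        return suspicious


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    monitor = MembershipLeakageMonitor()
    labels = rng.integers(0, 10, size=5)

    # Round 1: the global model is confident and correct on all local samples.
    probs_r1 = np.full((5, 10), 0.01)
    probs_r1[np.arange(5), labels] = 0.91
    print(monitor.check_round(probs_r1, labels))   # [] -- nothing to compare yet

    # Round 2: sample 3's prediction collapses, as a poisoned global model
    # targeting that sample might cause.
    probs_r2 = probs_r1.copy()
    probs_r2[3] = 0.01
    probs_r2[3, (labels[3] + 1) % 10] = 0.91
    print(monitor.check_round(probs_r2, labels))   # [3]
```

In a real FL client, such a check would run each time a new global model arrives, before local training; how flagged samples are then handled (for example, withholding them from subsequent local updates) depends on the mitigation policy the client adopts.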