Privacy-Preserving and Byzantine-Robust Federated Learning
- Publisher: IEEE COMPUTER SOC
- Publication Type: Journal Article
- Citation: IEEE Transactions on Dependable and Secure Computing, 2024, 21(2), pp. 889-904
- Issue Date: 2024
Closed Access
Filename | Description | Size
---|---|---
1636429.pdf | Published version | 4.74 MB
This item is closed access and not available.
Federated learning (FL) trains a model over multiple datasets by collecting the participants' local models rather than their raw data, which facilitates distributed data analysis in many real-world applications. Since model parameters can leak information about the training datasets, it is necessary to preserve the privacy of the FL participants' local models. Furthermore, FL is vulnerable to poisoning attacks, which can significantly degrade model utility. To address these issues, we propose a privacy-preserving and Byzantine-robust FL scheme $\Pi_{\mathrm{P2Brofl}}$ that simultaneously remains robust against poisoning attacks and preserves the privacy of local models. Specifically, $\Pi_{\mathrm{P2Brofl}}$ leverages three-party computation (3PC) to securely realize a Byzantine-robust aggregation method. To improve the efficiency of privacy-preserving local model selection and aggregation, we propose a maliciously secure top-$k$ protocol $\Pi_{\mathrm{top}\text{-}k}$ with low communication overhead. Moreover, because secure shuffling is required by our secure top-$k$ protocol, we present an efficient maliciously secure shuffling protocol $\Pi_{\mathrm{shuffle}}$. We give a security proof of the scheme and conduct experiments on real-world datasets. When 50% of the participants are Byzantine, the model's error rate increases by only 1.05% with our scheme, compared with 23.78% without our protection.
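The following is a minimal plaintext sketch of the kind of Byzantine-robust aggregation the abstract describes: score each local model, keep only the top-$k$ most trustworthy ones, and average them. The scoring rule used here (distance to the coordinate-wise median) and the function name `robust_aggregate` are illustrative assumptions; the abstract does not specify the metric, and in the actual scheme this selection runs inside 3PC with secure shuffling and the secure top-$k$ protocol, so no party sees individual local models in the clear.

```python
import numpy as np

def robust_aggregate(local_models, k):
    """Plaintext analogue of Byzantine-robust aggregation via top-k selection.

    NOTE: illustrative sketch only. The scoring rule (distance to the
    coordinate-wise median) is an assumption, not the paper's metric, and
    the real protocol performs this computation under 3PC.
    """
    updates = np.stack(local_models)               # shape: (n_clients, dim)
    center = np.median(updates, axis=0)            # robust reference point
    scores = np.linalg.norm(updates - center, axis=1)
    keep = np.argsort(scores)[:k]                  # k models closest to the median
    return updates[keep].mean(axis=0)              # aggregate only the selected models

# Toy usage: 6 honest clients plus 4 Byzantine clients sending large outliers.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=8) for _ in range(6)]
byzantine = [rng.normal(-10.0, 5.0, size=8) for _ in range(4)]
print(robust_aggregate(honest + byzantine, k=5))   # stays close to the honest mean
```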