BFU: Bayesian Federated Unlearning with Parameter Self-Sharing
- Publisher:
- Association for Computing Machinery
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of the ACM Conference on Computer and Communications Security, 2023, pp. 567-578
- Issue Date:
- 2023-07-10
Closed Access
Filename | Description | Size
---|---|---
Bayesian Federated Unlearning with Parameter Self-Sharing.pdf | Published version | 5.65 MB
This item is closed access and not available.
As the right to be forgotten has been legislated worldwide, many studies attempt to design machine unlearning mechanisms that enable data erasure from a trained model. Existing machine unlearning studies focus on centralized learning, where the server can access all users' data. However, in the popular scenario of federated learning (FL), the server cannot access users' training data. In this paper, we investigate the problem of machine unlearning in FL. We formalize a federated unlearning problem and propose a Bayesian federated unlearning (BFU) approach that implements unlearning for a trained FL model without sharing raw data with the server. Specifically, we first introduce an unlearning rate in BFU to balance the trade-off between forgetting the erased data and remembering the original global model, making it adaptive to different unlearning tasks. Then, to mitigate the accuracy degradation caused by unlearning, we propose BFU with parameter self-sharing (BFU-SS). BFU-SS treats data erasure and maintaining learning accuracy as two tasks and optimizes them jointly during unlearning. Extensive comparisons between our methods and the state-of-the-art federated unlearning method demonstrate the superiority of our proposed realizations.
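The Bayesian unlearning idea behind the abstract can be illustrated on a toy model. The sketch below assumes a conjugate Gaussian setting (Gaussian prior over a mean parameter, known unit observation noise), where the exact posterior is available in closed form and "unlearning" amounts to dividing the erased data's likelihood out of the posterior, scaled by an unlearning rate `lam`. All names, the model, and the closed-form update are illustrative assumptions for intuition only; they are not the paper's variational FL implementation.

```python
import numpy as np

SIGMA2 = 1.0  # known observation noise variance (assumed for this sketch)

def posterior(x, mu0=0.0, var0=1.0):
    """Exact Gaussian posterior over the mean given observations x."""
    prec = 1.0 / var0 + len(x) / SIGMA2
    mu = (mu0 / var0 + np.sum(x) / SIGMA2) / prec
    return mu, 1.0 / prec

def unlearn(mu_post, var_post, x_erased, lam=1.0):
    """Divide the erased data's likelihood out of the posterior.

    lam is the unlearning rate: lam = 0 keeps the original posterior
    (full "remembering"), lam = 1 removes the erased data's
    contribution exactly (full "forgetting")."""
    prec = 1.0 / var_post - lam * len(x_erased) / SIGMA2
    mu = (mu_post / var_post - lam * np.sum(x_erased) / SIGMA2) / prec
    return mu, 1.0 / prec

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)
erased, retained = data[:10], data[10:]

mu_full, var_full = posterior(data)
mu_unl, var_unl = unlearn(mu_full, var_full, erased, lam=1.0)
mu_ret, var_ret = posterior(retained)  # retrain-from-scratch reference

# With lam = 1, the unlearned posterior coincides with the posterior
# obtained by retraining on the retained data alone.
print(np.isclose(mu_unl, mu_ret), np.isclose(var_unl, var_ret))
```

Intermediate values of `lam` interpolate between the original and the fully-unlearned posterior, mirroring the forgetting-vs-remembering trade-off the abstract describes; in the paper's non-conjugate deep-learning setting this trade-off is optimized variationally rather than in closed form.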