A Study on Machine Unlearning
- Publication Type: Thesis
- Issue Date: 2024
This item is open access.
Machine unlearning addresses the right to be forgotten by removing the influence of specific training samples from trained machine-learning models. This thesis tackles four key challenges: (1) preserving model utility while effectively forgetting data, (2) enabling unlearning in federated settings, (3) accurately measuring unlearning effectiveness, and (4) safeguarding privacy throughout the unlearning process.
In centralized ML, approximate unlearning often causes “catastrophic unlearning,” in which forgetting the target data degrades the model’s overall utility. We propose a two-objective optimization approach that balances forgetting and remembering, with parameter self-sharing to maintain model performance. For federated scenarios, we introduce FedU, which supports partial sample unlearning without granting the server direct access to client data. To measure unlearning outcomes, we develop EMU, an evaluation method based on model changes rather than backdoor triggers, which reveals how factors such as sample similarity and task type affect unlearning. Finally, we propose a compressive representation forgetting method to defend against privacy leakage during machine unlearning.
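To make the two-objective idea concrete, the following is a minimal sketch of one plausible form of such an update, assuming a simple logistic-regression model: the update descends on the loss over the retained data (remembering) while ascending on the loss over the data to be forgotten. The model choice, the weights `alpha`/`beta`, and the function names are illustrative assumptions, not the thesis's exact formulation (which also involves parameter self-sharing).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y):
    # Gradient of mean binary cross-entropy for logistic regression.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def unlearn_step(w, X_retain, y_retain, X_forget, y_forget,
                 lr=0.1, alpha=1.0, beta=0.5):
    # Two-objective update (illustrative): descend on the retain
    # ("remember") loss while ascending on the forget loss, so the
    # forgotten samples' influence fades without collapsing utility.
    g_retain = grad_logloss(w, X_retain, y_retain)
    g_forget = grad_logloss(w, X_forget, y_forget)
    return w - lr * (alpha * g_retain - beta * g_forget)

# Toy data: label is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
X_forget, y_forget = X[:10], y[:10]
X_retain, y_retain = X[10:], y[10:]

w = np.zeros(3)
for _ in range(200):
    w = unlearn_step(w, X_retain, y_retain, X_forget, y_forget)
```

Keeping `alpha` larger than `beta` is what guards against the catastrophic case: the remembering objective dominates, so gradient ascent on the forget set perturbs the model without destroying accuracy on the retained data.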
In summary, we investigate four fundamental problems in machine unlearning and propose corresponding solutions to tackle these challenges. Our research aims to fill existing gaps in machine unlearning research, providing robust approaches for implementing unlearning, evaluating unlearning, and protecting privacy throughout the unlearning process. We believe that the findings of this study contribute significantly to advancing the understanding and practical application of machine unlearning.