Conditional Matching GAN Guided Reconstruction Attack in Machine Unlearning
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- GLOBECOM 2023 - 2023 IEEE Global Communications Conference, 2024, pp. 44-49
- Issue Date:
- 2024-02-26
Closed Access
Filename | Description | Size
---|---|---
1714727.pdf | Published version | 2.53 MB
This item is closed access and not available.
Machine unlearning allows data owners to erase certain data and its impact from learned models, in support of the right to be forgotten. However, privacy risks during the unlearning process have been identified: earlier studies exploited differences in model outputs before and after unlearning to mount membership inference attacks. Nevertheless, current attacks on machine unlearning are limited to inference and cannot reconstruct data without access to the victim's dataset. In this paper, we propose a reconstruction attack towards machine unlearning (RAU), which reconstructs the unlearned data by exploiting the privacy leakage between the two models. To improve reconstruction quality, we propose a Conditional Matching Generative Adversarial Network (CMGAN), a novel variant of generative adversarial networks that introduces a reconstructive loss. Our work demonstrates the potential privacy leakage in current machine unlearning scenarios. Experimental results on MNIST and Fashion-MNIST show that the proposed attack achieves high label-recovery accuracy and good data-recovery performance.
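The abstract describes CMGAN as a GAN variant whose generator objective adds a reconstructive loss on top of the usual adversarial term. The paper itself is closed access, so the exact formulation is not available here; the sketch below shows one common way such a combined objective is written (non-saturating adversarial loss plus a weighted MSE reconstruction term). The function name `cmgan_generator_loss` and the weight `lam` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def cmgan_generator_loss(d_fake, x_rec, x_target, lam=10.0):
    """Assumed combined generator objective (not from the paper):
    non-saturating adversarial loss + lam * MSE reconstructive loss.

    d_fake   : discriminator scores D(G(z|y)) in (0, 1)
    x_rec    : generator output (candidate reconstruction)
    x_target : data the attacker tries to reconstruct
    """
    eps = 1e-8
    adv = -np.mean(np.log(d_fake + eps))      # push D(G(z|y)) toward 1
    rec = np.mean((x_rec - x_target) ** 2)    # pixel-wise reconstruction term
    return adv + lam * rec, adv, rec

# Toy example with hand-picked values:
d_fake = np.array([0.5])                      # discriminator is unsure
x_rec = np.zeros((4, 4))                      # all-zero candidate image
x_target = np.ones((4, 4))                    # all-one target image
total, adv, rec = cmgan_generator_loss(d_fake, x_rec, x_target, lam=10.0)
```

The weighting `lam` controls the trade-off between fooling the discriminator and matching the unlearned sample; in practice it would be tuned per dataset.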