GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- IEEE International Conference on Communications, June 2020, pp. 1-6
- Issue Date:
- 2020-06-01
Closed Access
Filename | Description | Size
---|---|---
09148790.pdf | Published version | 467.78 kB
This item is closed access and not available.
Federated learning has recently received considerable attention for its privacy-protection properties. However, recent research has found that federated learning models are susceptible to various inference attacks. In this paper, we present a membership inference attack method that can cause serious privacy leakage in federated learning. An adversary who participates in federated learning can train a classification attack model to launch the membership inference attack, which determines whether a given data record belongs to the model's training dataset. Existing membership inference methods perform poorly in this setting because attack data are scarce: the training data of each participant are independent. To overcome this scarcity, the adversary can enrich the attack data using a generative adversarial network (GAN), a practical method for increasing data diversity. We substantiate that this GAN-enhanced membership inference attack method achieves 98% attack accuracy. We perform experiments to show that data diversity and overfitting make federated learning models susceptible.
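The full paper is closed access, but the core idea the abstract describes can be illustrated with a minimal, hypothetical sketch (not the paper's actual code): the "attack model" is simply a binary classifier over the target model's confidence vectors, exploiting the fact that an overfitted model emits sharply peaked confidences on its training members and flatter ones on non-members. The simulated confidence distributions, thresholds, and classifier below are all illustrative assumptions; the paper's GAN step, which is omitted here, would enrich the attacker's pool of such examples with synthetic records.

```python
import math
import random

random.seed(0)

def confidence_vector(member, n_classes=10):
    """Simulate a target model's softmax output (illustrative assumption):
    members get a sharp peak due to overfitting, non-members a flatter one."""
    peak = random.uniform(0.85, 0.99) if member else random.uniform(0.3, 0.6)
    rest = [random.random() for _ in range(n_classes - 1)]
    total = sum(rest)
    rest = [(1 - peak) * r / total for r in rest]
    # Sort descending so the attack model sees label-agnostic features.
    return sorted([peak] + rest, reverse=True)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Attacker's pool: (confidence vector, membership label) pairs.
data = [(confidence_vector(m), m) for m in [1, 0] * 500]

# A logistic-regression attack model trained by plain SGD stands in for
# the paper's classification attack model.
w = [0.0] * 10
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        grad = p - y  # gradient of log-loss w.r.t. the logit
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

correct = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == bool(y)
    for x, y in data
)
acc = correct / len(data)
print(f"attack accuracy on the attacker's pool: {acc:.2f}")
```

Because the simulated member and non-member confidence vectors are nearly separable by their top entry, the toy attack classifier reaches high accuracy, mirroring the abstract's point that overfitting is what makes membership leak.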