GSMI: A Gradient Sign Optimization Based Model Inversion Method

Publisher:
SPRINGER INTERNATIONAL PUBLISHING AG
Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, 13151 LNAI, pp. 67-78
Issue Date:
2022-01-01
The vulnerabilities of deep learning models with respect to security and privacy have attracted considerable attention. Researchers have shown that it is possible to reconstruct the training data of a target model. However, the performance of existing methods relies heavily on auxiliary datasets. In this paper, we investigate the model inversion problem under a strict restriction: the adversary aims to reconstruct plausible samples of the target class without the help of any auxiliary information. To address this challenge, we propose a Gradient Sign Model Inversion (GSMI) method based on the idea of adversarial example generation. Specifically, we make three modifications to i-FGSM, a popular adversarial example generation method, to produce plausible samples: 1) increasing the number of attack iterations, 2) superimposing noise to reveal more salient features learned by the target model, and 3) removing subtle noise to make the reconstructed samples more plausible. Nevertheless, we find that samples generated by GSMI still contain noisy components. We therefore adopt the idea of adjacent image regions to design a two-pass component selection algorithm that generates more reasonable samples of the target class. Experiments show that the inversion samples produced by GSMI are close to real target-class samples, with some fluctuation across classes. In addition, we provide a detailed analysis of the reasons for the limitations of optimization-based model inversion methods.
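
The core of GSMI, as described above, is an i-FGSM-style gradient-sign optimization driven toward a target class rather than away from a true label. The sketch below illustrates that idea in PyTorch; it is a minimal reconstruction from the abstract, not the authors' released code, and the starting image, step size `alpha`, iteration count, noise schedule, and thresholding rule are all assumed placeholders rather than the paper's actual choices.

```python
import torch
import torch.nn.functional as F

def gsmi_sketch(model, target_class, shape=(1, 3, 32, 32),
                alpha=1 / 255, iters=2000, noise_threshold=0.05):
    """Targeted i-FGSM-style inversion starting from a neutral image."""
    model.eval()
    # The adversary has no auxiliary data, so start from a mid-gray input.
    x = torch.full(shape, 0.5, requires_grad=True)
    target = torch.tensor([target_class])
    for step in range(iters):         # 1) far more iterations than an attack
        if x.grad is not None:
            x.grad.zero_()
        loss = F.cross_entropy(model(x), target)
        loss.backward()
        with torch.no_grad():
            # Targeted sign step: descend the loss for the target class.
            x -= alpha * x.grad.sign()
            # 2) periodically superimpose small noise (assumed schedule) to
            #    push the optimization toward more pronounced features.
            if step % 200 == 0:
                x += 0.01 * torch.randn_like(x)
            x.clamp_(0.0, 1.0)
    # 3) crude stand-in for the paper's noise removal: reset pixels that
    #    barely moved from the neutral start (criterion is an assumption).
    with torch.no_grad():
        delta = x - 0.5
        x = torch.where(delta.abs() < noise_threshold,
                        torch.full_like(x, 0.5), x)
    return x.detach()
```

Because each step descends the cross-entropy loss for the chosen class, many iterations amplify whatever features the model associates with that class. The final thresholding line only stands in for the paper's noise removal; the two-pass component selection over adjacent image regions is not specified in enough detail in the abstract to reproduce here.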