Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

Publication Type: Conference Proceeding
Citation: Proceedings of the 34th USENIX Security Symposium, 2025, pp. 1709-1727
Issue Date: 2025-01-01
Abstract: Generative AI technology has become increasingly integrated into our daily lives, offering powerful capabilities to enhance productivity. However, these same capabilities can be exploited by adversaries for malicious purposes. While existing research on adversarial applications of generative AI predominantly focuses on cyberattacks, less attention has been given to attacks targeting deep learning models. In this paper, we introduce the use of generative AI to facilitate model-related attacks, including model extraction, membership inference, and model inversion. Our study reveals that adversaries can launch a variety of model-related attacks against both image and text models in a data-free and black-box manner, achieving performance comparable to baseline methods that have white-box access to the target models' training data and parameters. This research serves as an important early warning to the community about the potential risks of generative AI-powered attacks on deep learning models. The source code is provided at: https://zenodo.org/records/14737003.
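To make the attack setting concrete, below is a minimal, hypothetical sketch of data-free model extraction in Python (PyTorch). It is not the paper's implementation: the victim model, the use of random noise as a stand-in for generator-produced queries, and all names are illustrative assumptions. The surrogate is trained only on the black-box victim's soft-label responses, with no access to training data or parameters.

    # Minimal sketch of data-free model extraction (illustrative only).
    # Assumption: a black-box `victim` that returns class probabilities;
    # random noise stands in for samples from a generative model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    DIM, CLASSES = 32, 10

    # Hypothetical victim: the adversary observes only its outputs.
    _victim = nn.Linear(DIM, CLASSES)
    def victim(x):
        with torch.no_grad():
            return F.softmax(_victim(x), dim=-1)

    # Surrogate ("extracted") model trained purely on query responses.
    surrogate = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(),
                              nn.Linear(64, CLASSES))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.randn(128, DIM)   # stand-in for generator samples
        soft = victim(x)            # black-box query, soft labels only
        loss = F.kl_div(F.log_softmax(surrogate(x), dim=-1), soft,
                        reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

    # Measure label agreement between surrogate and victim on fresh queries.
    x = torch.randn(1000, DIM)
    agree = (surrogate(x).argmax(-1) == victim(x).argmax(-1)).float().mean()
    print(f"label agreement: {agree:.2%}")

The design point this sketch captures is that extraction quality hinges on the query distribution; the paper's contribution is that generative AI can supply realistic queries where this sketch uses noise.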