Exploring Privacy Vulnerabilities in Deep Learning by Reconstructing Training Samples

Publication Type:
Thesis
Issue Date:
2024
Deep learning models, while immensely powerful and capable of revolutionizing various fields, have a critical shortcoming: they are not inherently designed with privacy and security in mind. This oversight results in vulnerabilities that affect the entire lifecycle of these models, from the initial training phase to their deployment and use in real-world applications. In recent years, a variety of sophisticated attacks have been developed specifically to exploit these vulnerabilities. During the training phase, for example, adversaries can launch poisoning attacks that cause the model to make incorrect predictions or to operate in unintended ways.

This thesis studies the knowledge acquired by existing deep learning models from the perspective of vulnerability analysis. The goal is to deepen our understanding of deep learning models and, ultimately, to leverage deep learning methods more effectively. Specifically, the thesis proceeds from two perspectives: vulnerability analysis and deep learning applications. In our study of optimization-based model inversion, the thesis focuses on how the forward- and backward-propagation information of a model can be used to reconstruct the category knowledge the model has learned. Specifically, we combine adversarial example generation methods to reconstruct the training data of the victim model. Through this research, we explore the private information about the training data that is contained in the model's forward and backward propagation.

Building upon the insights gained from vulnerability analysis, this thesis investigates semantic communication to explore the privacy risks that future real-world deep learning applications may face. We conducted preliminary fundamental research to understand the challenges present in semantic communication, examining how the knowledge acquired by deep learning models can be better utilized and scrutinized. This research lays the foundation for enhancing the privacy protection capabilities of semantic communication.

In summary, this thesis explores privacy vulnerabilities in deep learning models. It investigates model inversion attacks to understand the knowledge learned by these models and to identify potential risks. The research reveals biases in the information learned by models and enhances our understanding of the knowledge they acquire. Additionally, it introduces asynchronous multi-task semantic communication as preliminary foundational research to improve communication efficiency. These findings highlight the importance of developing robust defense mechanisms and suggest avenues for future research to improve the privacy of deep learning applications.
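For readers unfamiliar with the attack family studied here, the following is a minimal sketch of generic optimization-based model inversion in PyTorch. It is an illustration under assumed settings, not the specific method developed in this thesis: the function name `invert_class`, the input shape, the step count, the learning rate, and the total-variation prior weight are all hypothetical choices. The sketch shows only the core idea the abstract describes: using the victim model's forward pass (predictions) and backward pass (gradients) to optimize an input that the model strongly associates with a target class.

```python
# Illustrative sketch of optimization-based model inversion (hypothetical
# parameters; not the thesis's exact method). Starting from random noise,
# we use the victim model's forward pass (logits) and backward pass
# (gradients) to optimize an input the model classifies as target_class,
# approximating a representative of that class's training data.
import torch
import torch.nn.functional as F

def invert_class(model, target_class, shape=(1, 3, 32, 32),
                 steps=500, lr=0.1, tv_weight=1e-4):
    model.eval()
    x = torch.randn(shape, requires_grad=True)   # start from random noise
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)                        # forward propagation
        loss = F.cross_entropy(logits, target)
        # Total-variation prior: a common regularizer that encourages
        # smoother, more natural-looking reconstructions.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()       # backward propagation
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0, 1)                       # keep pixels in valid range
    return x.detach()
```

The adversarial-example connection the abstract draws is visible in the structure: as in gradient-based adversarial example generation, the model's weights stay fixed and gradients are taken with respect to the input, but here the optimization runs from noise toward a class representative rather than perturbing a real sample toward misclassification.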