A study of privacy attacks and defences in deep learning
- Publication Type: Thesis
- Issue Date: 2024
This item is open access.
Deep learning has emerged as a transformative technology, enabling major breakthroughs across domains such as computer vision and natural language processing. However, its massive data requirements raise significant privacy concerns, so understanding and mitigating these privacy vulnerabilities is essential. This thesis investigates three key areas related to the privacy risks of deep learning models. First, it proposes a label-only membership inference attack framework targeting semantic segmentation models, demonstrates higher attack performance than prior work, and discusses potential defences. Second, it introduces a unified federated learning framework that addresses privacy preservation and personalisation simultaneously, outperforming existing methods in protecting privacy against gradient inversion attacks while enabling personalisation across heterogeneous client data. Third, it examines how model architecture affects privacy vulnerability by comparing CNNs and Transformers, identifying specific architectural designs that make models susceptible to membership inference, attribute inference, and gradient inversion attacks. The findings offer insights into mitigating privacy risks in deep learning through architectural design principles, defences against specific attacks, and new privacy-preserving training paradigms. In summary, this thesis provides a systematic account of deep learning privacy vulnerabilities and potential solutions.
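For readers unfamiliar with the label-only setting, the sketch below illustrates the general idea behind such attacks on segmentation models: the adversary sees only hard per-pixel labels (no confidence scores) and guesses membership from how closely the predicted mask agrees with the ground truth. This is a minimal, generic thresholding sketch, not the framework proposed in the thesis; the function names and the calibration threshold are illustrative assumptions.

```python
import torch

@torch.no_grad()
def predicted_mask(model, image):
    """Query the target model and keep only the hard labels (label-only setting)."""
    logits = model(image.unsqueeze(0))       # assumed output shape (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0)   # (H, W) predicted class per pixel

def pixel_accuracy(pred, target):
    """Fraction of pixels where the predicted label matches the ground truth."""
    return (pred == target).float().mean().item()

def infer_membership(model, image, target_mask, threshold=0.9):
    """Guess 'member' when the label-only prediction agrees unusually well with
    the ground-truth mask; in practice `threshold` would be calibrated, e.g. on
    shadow models, rather than fixed as here."""
    score = pixel_accuracy(predicted_mask(model, image), target_mask)
    return score >= threshold, score
```

The intuition is that training-set members tend to receive unusually accurate masks, so a well-calibrated threshold on a label-only agreement score can separate members from non-members even without access to model confidences.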
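Similarly, the following is a minimal sketch of the gradient inversion threat model that the federated framework defends against, in the style of "Deep Leakage from Gradients" (Zhu et al., 2019): a server-side adversary optimises a dummy input and label so that their gradients match the gradients shared by a client. The PyTorch structure and hyperparameters are illustrative assumptions, not the thesis's method.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, shared_grads, input_shape, num_classes,
                       steps=300, lr=0.1):
    """DLG-style reconstruction: optimise a dummy (input, label) pair so that
    its gradients match the gradients a client shared during federated training."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label guess
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy gradients and the intercepted client gradients.
        grad_diff = sum(((g - s) ** 2).sum() for g, s in zip(grads, shared_grads))
        grad_diff.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.detach()
```

When this optimisation converges, the dummy input approximates the client's private training example, which is why gradient-sharing protocols need explicit defences of the kind studied in the thesis.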