Using Adversarial Noises to Protect Privacy in Deep Learning Era

Publication Type: Conference Proceeding
Citation: 2018 IEEE Global Communications Conference, GLOBECOM 2018 - Proceedings, 2018
Issue Date: 2018-01-01
Abstract: © 2018 IEEE. The unprecedented accuracy of deep learning methods has established them as the foundation of new AI-based services on the Internet. At the same time, it raises serious privacy concerns: deep-learning-aided privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools. We also propose two new metrics to measure image privacy, and, based on these metrics, two image privacy protection schemes that build on the adversarial example idea. The performance of our schemes is validated by simulation on a large-scale dataset. Our study shows that image privacy can be protected by adding a small amount of noise whose impact on image quality is imperceptible to humans.
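The paper's concrete schemes are not reproduced on this page, but the adversarial example idea it builds on can be sketched: a small gradient-based perturbation is added to an image so that a deep learning classifier misreads it, while the change remains visually imperceptible. Below is a minimal FGSM-style sketch in PyTorch; the function name, the epsilon value, and the choice of FGSM itself are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def fgsm_privacy_noise(model, image, label, epsilon=0.01):
    """Add a small adversarial perturbation so a classifier misreads
    the image while the change stays visually imperceptible.

    Generic FGSM-style sketch, not the paper's exact scheme.
    `model` maps an image batch to class logits; `image` is a tensor
    in [0, 1] of shape (1, C, H, W); `label` is the true class index
    as a LongTensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the classifier's loss,
    # then clamp so the result remains a valid image in [0, 1].
    noisy = image + epsilon * image.grad.sign()
    return noisy.clamp(0.0, 1.0).detach()
```

A caller would pass a pretrained classifier and a labeled image, e.g. `noisy = fgsm_privacy_noise(model, img, torch.tensor([label]))`; a small epsilon keeps the added noise humanly imperceptible, in the spirit of the abstract's claim.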