Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity

Publisher: Association for Computing Machinery (ACM)
Publication Type: Journal Article
Citation: ACM Computing Surveys, 2022, 55(8), pp. 1-39
Issue Date: 2022-12-23
File: Adversarial Attacks and Defenses in Deep Learning.pdf (published version, Adobe PDF, 1.1 MB)
The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks posed by adversarial samples have hindered the large-scale deployment of deep learning. In such attacks, adversarial perturbations that are imperceptible to human eyes significantly degrade the model's performance. Many papers have been published on adversarial attacks and their countermeasures in the realm of deep learning. Most focus on evasion attacks, where adversarial examples are crafted at test time, as opposed to poisoning attacks, where poisoned data is inserted into the training set. Furthermore, it is difficult to evaluate the real threat of adversarial attacks, or the robustness of a deep learning model against them, as there are no standard evaluation methods. Hence, in this article, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity and provides a lifecycle for adversarial attacks and defenses.
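
For concreteness, the following is a minimal sketch of one canonical test-time evasion attack, the fast gradient sign method (FGSM), which perturbs an input in the direction that increases the model's loss. The PyTorch model, labels, and epsilon budget here are illustrative assumptions, not taken from the article itself.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    epsilon: float = 0.03) -> torch.Tensor:
        """Craft an evasion example with one signed-gradient step (FGSM).

        x: input batch in [0, 1], y: true labels,
        epsilon: maximum per-pixel perturbation budget (assumed value).
        """
        model.eval()
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then clamp back
        # to the valid image range so the change stays small.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Because the perturbation is bounded by epsilon per pixel, the adversarial image is typically indistinguishable from the original to a human observer, which is the property the abstract refers to. A poisoning attack, by contrast, would modify the training data before the model is fit rather than the inputs at test time.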