AdverseGen: A Practical Tool for Generating Adversarial Examples to Deep Neural Networks Using Black-Box Approaches

Publisher:
Springer International Publishing
Publication Type:
Chapter
Citation:
Artificial Intelligence XXXVIII, LNAI vol. 13101, pp. 313-326, 2021
Issue Date:
2021-01-01
Abstract:
Deep neural networks are fragile: they are easily fooled by inputs with deliberately crafted perturbations, a key concern in image security. Given a trained neural network, we are always curious whether it has actually learned the concept we intended it to learn, and whether it harbors vulnerabilities that attackers could exploit. A tool that lets non-experts test a trained neural network and probe for such vulnerabilities would therefore be useful. In this paper, we introduce AdverseGen, a tool for generating adversarial examples against a trained deep neural network using black-box approaches, i.e., without using any information about the network's architecture or its gradients. The tool provides customized adversarial attacks for both non-professional users and developers, and can be invoked through a graphical user interface or a command-line mode. Moreover, it supports different attack goals (targeted and non-targeted) and different distance metrics.
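To make the black-box setting described above concrete, the sketch below shows one minimal non-targeted attack that queries only a model's predicted labels and searches for an L-infinity-bounded perturbation by random sampling. This is an illustrative assumption about how such an attack can work, not AdverseGen's actual interface; the names query_model, random_search_attack, and the budget parameters are hypothetical.

import numpy as np

def random_search_attack(query_model, x, true_label,
                         epsilon=0.05, n_queries=1000, rng=None):
    """Search for a non-targeted adversarial example using only the
    model's predicted labels (no architecture or gradient access).

    query_model: callable mapping an input array to a predicted label.
    x: clean input with pixel values in [0, 1].
    epsilon: L-infinity perturbation budget.
    Returns a perturbed input if the label changes, else None.
    """
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(n_queries):
        # Sample a candidate uniformly inside the L-infinity ball of
        # radius epsilon around x, clipped to the valid pixel range.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)
        # Query-only access: we observe the predicted label, nothing else.
        if query_model(candidate) != true_label:
            return candidate  # adversarial example found
    return None  # attack failed within the query budget

Real black-box attacks are far more query-efficient than this uniform random search, but the sketch captures the defining constraint: the attacker interacts with the model solely through its outputs.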