Adversarial learning games with deep learning models

Publication Type:
Conference Proceeding
Citation:
Proceedings of the International Joint Conference on Neural Networks (IJCNN), May 2017, pp. 2758-2767
Issue Date:
2017-06-30
© 2017 IEEE. Deep learning has been found to be vulnerable to changes in the data distribution: inputs that differ imperceptibly from the training data can be assigned a completely different class label. An existing deep learning network such as a Convolutional Neural Network (CNN) is therefore vulnerable to adversarial examples. We design an adversarial learning algorithm for supervised learning in general and CNNs in particular. Adversarial examples are generated through a game-theoretic formulation of deep learning performance, in which the interaction between an intelligent adversary and the deep learning model is a two-person sequential noncooperative Stackelberg game with stochastic payoff functions. The Stackelberg game is solved at the Nash equilibrium, a pair of strategies (learner weights and genetic operations) from which neither the learner nor the adversary has an incentive to deviate. Algorithm performance is evaluated under different strategy spaces on the MNIST handwritten digits data. We show that the Nash equilibrium leads to solutions robust to subsequent adversarial data manipulations. The results suggest that game theory and stochastic optimization algorithms can be used to study performance vulnerabilities in deep learning models.
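The interaction the abstract describes, a learner (leader) re-training its weights against perturbations that an adversary (follower) evolves with genetic operations, can be sketched with alternating best responses. This is a minimal illustrative toy, not the paper's implementation: it uses logistic regression on synthetic data in place of a CNN on MNIST, and a bare mutation-plus-selection loop in place of the paper's full genetic-operation strategy space; all names and parameters (e.g. `eps`, `adversary_ga`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for MNIST (illustrative only).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    """Cross-entropy loss: the learner minimizes it, the adversary maximizes it."""
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def train_learner(X, y, steps=200, lr=0.5):
    """Leader's best response: fit weights on the (possibly perturbed) data."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def adversary_ga(w, X, y, eps=0.3, pop=20, gens=15):
    """Follower's move: evolve a bounded perturbation that maximizes the
    learner's loss (mutation + selection; a stand-in for the paper's
    genetic operations)."""
    population = [rng.normal(scale=0.05, size=X.shape) for _ in range(pop)]
    for _ in range(gens):
        # Keep the half of the population that hurts the learner most.
        scored = sorted(population,
                        key=lambda d: -loss(w, X + np.clip(d, -eps, eps), y))
        elite = scored[: pop // 2]
        # Mutate the elite to refill the population.
        children = [e + rng.normal(scale=0.02, size=X.shape) for e in elite]
        population = elite + children
    best = max(population, key=lambda d: loss(w, X + np.clip(d, -eps, eps), y))
    return np.clip(best, -eps, eps)

# Alternate best responses: a crude fixed-point search toward the equilibrium,
# where neither side gains by deviating from its strategy.
w = train_learner(X, y)
for _ in range(3):
    delta = adversary_ga(w, X, y)
    w = train_learner(X + delta, y)  # learner re-trains on adversarial data

clean_loss = loss(w, X, y)
adv_loss = loss(w, X + delta, y)
print(f"loss on clean data: {clean_loss:.3f}, on adversarial data: {adv_loss:.3f}")
```

The alternation above converges (informally) when the learner's retrained weights no longer change much in response to the adversary's best perturbation, which mirrors the no-deviation condition of the Nash equilibrium in the abstract.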