Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks
- Publisher:
- ACM
- Publication Type:
- Conference Proceeding
- Citation:
- International Conference on Information and Knowledge Management, Proceedings, 2021, pp. 3617-3621
- Issue Date:
- 2021-10-26
Closed Access
Filename | Description | Size
---|---|---
3459637.3482161.pdf | Published version | 1.12 MB
Copyright Clearance Process
This item is closed access and not available.
Graph Neural Networks (GNNs) have emerged as a series of effective learning methods for graph-related tasks. However, GNNs are shown to be vulnerable to adversarial attacks, where attackers can fool GNNs into making wrong predictions on adversarial samples with well-designed perturbations. Specifically, we observe that current evasion attacks suffer from two limitations: (1) attack strategies based on reinforcement learning may not transfer when the attack budget changes; (2) the greedy mechanism in the vanilla gradient-based method ignores the long-term benefits of each perturbation operation. In this paper, we propose a new attack method named projective ranking to overcome the above limitations. Our idea is to learn a powerful attack strategy that considers the long-term benefits of perturbations, then adjust it as little as possible to generate adversarial samples under different budgets. We further employ mutual information to measure the long-term benefit of each perturbation and rank perturbations accordingly, so the learned attack strategy achieves better attack performance. By projecting the attack strategy when the attack budget changes, our method dramatically reduces the cost of adapting to a new budget. Our preliminary evaluation on synthetic and real-world datasets demonstrates that our method achieves strong attack performance and effective transferability.
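The core idea in the abstract — rank candidate perturbations once by their estimated long-term benefit, then "project" that fixed ranking onto any attack budget — can be illustrated with a minimal sketch. This is not the paper's implementation; the candidate edges, the scores, and both helper functions below are hypothetical, standing in for the paper's mutual-information-based benefit estimates.

```python
# Hypothetical sketch of budget projection: score and rank candidate
# edge perturbations once, then serve any budget by taking the top-k
# entries of the same ranking -- no re-learning when the budget changes.

def rank_perturbations(scores):
    """Sort candidate perturbations by descending benefit score.

    `scores` maps a candidate edge flip (u, v) to its estimated
    long-term benefit (here just illustrative numbers, standing in
    for a mutual-information-based estimate).
    """
    return sorted(scores, key=scores.get, reverse=True)

def project_to_budget(ranking, budget):
    """Select the top-`budget` perturbations from a fixed ranking."""
    return ranking[:budget]

# Toy benefit scores for four candidate edge flips (made-up values).
scores = {(0, 1): 0.9, (2, 3): 0.4, (1, 3): 0.7, (0, 2): 0.1}
ranking = rank_perturbations(scores)

# The same ranking serves every budget.
print(project_to_budget(ranking, 2))  # -> [(0, 1), (1, 3)]
print(project_to_budget(ranking, 3))  # -> [(0, 1), (1, 3), (2, 3)]
```

The point of the sketch is the transferability claim: once the ranking is learned, changing the budget is a constant-time slice rather than a new optimization.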