Pruning graph neural networks by evaluating edge properties
- Publisher: ELSEVIER
- Publication Type: Journal Article
- Citation: Knowledge-Based Systems, 2022, 256
- Issue Date: 2022-11-28
Closed Access
| Filename | Description | Size |
| --- | --- | --- |
| Pruning graph neural networks by evaluating edge properties.pdf | Published version | 1.21 MB |
This item is closed access and not available.
The emergence of larger and deeper graph neural networks (GNNs) makes their training and inference increasingly expensive. Existing GNN pruning methods simultaneously prune the graph adjacency matrix and the model weights of a pretrained network by directly applying the lottery-ticket hypothesis, but the gains of such methods come mainly from weight pruning: when only the graph adjacency matrix is pruned, saliency-based methods struggle to outperform random pruning. This motivates us to score graph edges and network weights differently during GNN pruning. Rather than measuring the importance of graph edges with saliency metrics, we formulate the performance of GNNs mathematically with respect to the properties of their edges, showing how the performance drop can be avoided by pruning negative edges and nonbridges. This leads to a simple but effective two-step method for GNN pruning: saliency metrics are used for network-weight pruning, while the graph is sparsified in a way that preserves the loss. Experimental results show the effectiveness and efficiency of the proposed method on both small-scale graph datasets (Cora, Citeseer, and PubMed) and a large-scale dataset (Ogbn-ArXiv): our method saves up to 98% of floating-point operations (FLOPs) on the small graphs and 94% on the large one, with no significant drop in accuracy.
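To make the two-step recipe concrete, below is a minimal Python sketch, not the authors' code. Step 1 sparsifies the graph by keeping all bridge edges (found with networkx) and dropping the lowest-scoring nonbridge edges; step 2 applies plain magnitude (saliency) pruning to a weight matrix. The cosine-similarity proxy for the paper's "negative edge" property, and the `keep_ratio` and `sparsity` values, are illustrative assumptions only.

```python
"""Hedged sketch of the abstract's two-step GNN pruning recipe.

Assumptions (not taken from the paper's actual implementation):
- an edge's "sign" is approximated by cosine similarity of endpoint features;
- weight saliency is plain magnitude.
"""
import networkx as nx
import numpy as np


def prune_graph(G: nx.Graph, feats: dict, keep_ratio: float = 0.5) -> nx.Graph:
    """Step 1: keep bridges, drop the most 'negative' nonbridge edges."""
    bridges = set(nx.bridges(G))  # bridges hold components together, so keep them

    def score(u, v):
        # Hypothetical proxy for the paper's edge property: cosine similarity.
        a, b = feats[u], feats[v]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    candidates = [(score(u, v), u, v) for u, v in G.edges()
                  if (u, v) not in bridges and (v, u) not in bridges]
    candidates.sort()  # lowest (most negative) scores first
    n_drop = int(len(candidates) * (1 - keep_ratio))
    pruned = G.copy()
    pruned.remove_edges_from((u, v) for _, u, v in candidates[:n_drop])
    return pruned


def prune_weights(W: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Step 2: magnitude-based (saliency) pruning of a weight matrix."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = nx.karate_club_graph()
    feats = {n: rng.standard_normal(8) for n in G.nodes()}
    H = prune_graph(G, feats, keep_ratio=0.5)
    print(f"edges: {G.number_of_edges()} -> {H.number_of_edges()}")
    W = rng.standard_normal((16, 16))
    Wp = prune_weights(W, sparsity=0.9)
    print(f"nonzero weights: {np.count_nonzero(Wp)} / {W.size}")
```

Note the asymmetry the abstract argues for: the graph side is pruned by structural edge properties (here, bridges plus a score proxy), while the weight side uses a conventional saliency criterion.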