Negative Samples-enhanced Graph Convolutional Neural Networks

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2021 IEEE International Conference on Intelligent Systems and Knowledge Engineering, ISKE 2021, 2022, 00, pp. 262-268
Issue Date:
2022-01-01
Graph neural networks (GNNs) have shown great success in graph representation learning by extracting high-level features from nodes and their topology. Many previous studies have used the message passing mechanism to fuse neighborhood information under the assumption that neighbors share similar behaviors and are therefore positively correlated. The neighbor nodes can thus be regarded as positive samples, and most existing methods can be viewed as using these positive samples to update node feature vectors. Alongside these positive samples, there are also hidden negative samples (non-neighborhood nodes) that carry information about a given node but have been ignored so far. Moreover, existing studies on negative sampling for graphs only fuse the sampling results into the loss function for model training. In this paper, we propose Negative Samples-enhanced Graph Convolutional Neural Networks (NegGCNs), in which negatively sampled nodes are incorporated directly into the message passing mechanism and used to update node feature vectors. We further investigate the effects of the negative sampling rate and the number of negatively sampled nodes on the classification accuracy of graph nodes. Experiments on citation network datasets suggest that, at a suitable negative sampling rate, the proposed NegGCNs model improves the accuracy of graph node classification compared to GCNs.
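The abstract describes folding negatively sampled (non-neighbor) nodes directly into message passing rather than only into the loss. The sketch below illustrates that idea in PyTorch under several assumptions of our own: a dense adjacency matrix, uniform sampling of non-neighbors, and illustrative hyperparameters (neg_rate, num_neg). It is not the authors' implementation, only a minimal rendering of the mechanism the abstract outlines.

import torch
import torch.nn as nn


class NegGCNLayer(nn.Module):
    """Sketch of a GCN-style layer that also aggregates negative samples."""

    def __init__(self, in_dim, out_dim, neg_rate=0.1, num_neg=5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.neg_rate = neg_rate   # weight of the negative-sample message (assumed hyperparameter)
        self.num_neg = num_neg     # negative samples drawn per node (assumed hyperparameter)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency matrix
        n = x.size(0)
        adj_hat = adj + torch.eye(n, device=x.device)        # add self-loops
        deg_inv = adj_hat.sum(1).clamp(min=1.0).pow(-1)
        pos_msg = deg_inv.unsqueeze(1) * (adj_hat @ x)       # mean over neighbors (positive samples)

        # Uniform negative sampling over non-neighbors (one simple strategy).
        non_nbr = (adj_hat == 0).float()
        # Fall back to uniform weights if a node happens to have no non-neighbors.
        non_nbr = torch.where(non_nbr.sum(1, keepdim=True) > 0,
                              non_nbr, torch.ones_like(non_nbr))
        neg_idx = torch.multinomial(non_nbr, self.num_neg, replacement=True)  # (N, num_neg)
        neg_msg = x[neg_idx].mean(dim=1)                      # mean over sampled negatives

        # Positive messages pull representations together; negative ones push them apart.
        return torch.relu(self.lin(pos_msg - self.neg_rate * neg_msg))


if __name__ == "__main__":
    x = torch.randn(6, 8)
    adj = (torch.rand(6, 6) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
    out = NegGCNLayer(8, 4)(x, adj)
    print(out.shape)  # torch.Size([6, 4])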