View-Consistent Heterogeneous Network on Graphs With Few Labeled Nodes.

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Cybernetics, 2023, 53, (9), pp. 5523-5532
Issue Date:
2023-03-17
Filename:
View-Consistent_Heterogeneous_Network_on_Graphs_With_Few_Labeled_Nodes.pdf
Description:
Published version
Size:
1.29 MB (Adobe PDF)
Performing transductive learning on graphs with very few labeled samples, that is, two or three per category, is challenging due to the lack of supervision. Existing work widely adopts self-supervised learning with a single-view model to address this problem. However, recent observations show that multiview representations of an object share the same semantic information in a high-level feature space. For each sample, we generate heterogeneous representations and use a view-consistency loss to make these representations consistent with each other. Multiview representation also inspires us to supervise pseudolabel generation through mutual supervision between views. In this article, we therefore propose a view-consistent heterogeneous network (VCHN) that learns better representations by aligning view-agnostic semantics. Specifically, VCHN constrains the predictions between the two views so that the view pairs can supervise each other. To make the best use of cross-view information, we further propose a novel training strategy that generates more reliable pseudolabels, which in turn enhances the predictions of VCHN. Extensive experimental results on three benchmark datasets demonstrate that our method achieves superior performance over state-of-the-art methods under very low label rates.
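The abstract describes two mechanisms: a view-consistency loss that aligns the class predictions of two heterogeneous views, and mutual supervision in which pseudolabels are kept only where the views agree. A minimal NumPy sketch of both ideas is below; the symmetric-KL consistency term, the confidence threshold, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def view_consistency_loss(p1, p2, eps=1e-12):
    """One plausible consistency term: mean symmetric KL divergence
    between the two views' per-node class distributions."""
    kl12 = np.sum(p1 * (np.log(p1 + eps) - np.log(p2 + eps)), axis=1)
    kl21 = np.sum(p2 * (np.log(p2 + eps) - np.log(p1 + eps)), axis=1)
    return float(np.mean(0.5 * (kl12 + kl21)))

def mutual_pseudolabels(p1, p2, threshold=0.9):
    """Mutual supervision (assumed form): keep a pseudolabel only when
    both views predict the same class and both are confident."""
    y1, y2 = p1.argmax(axis=1), p2.argmax(axis=1)
    conf = np.minimum(p1.max(axis=1), p2.max(axis=1))
    mask = (y1 == y2) & (conf >= threshold)
    return y1, mask

# Toy usage: two nodes, two classes, two views.
p_view1 = softmax(np.array([[5.0, 0.0], [0.0, 5.0]]))
p_view2 = softmax(np.array([[5.0, 0.0], [5.0, 0.0]]))  # disagrees on node 1
labels, keep = mutual_pseudolabels(p_view1, p_view2)
```

In this toy example only node 0, where the views agree confidently, survives the mask; node 1 is withheld from pseudolabel training, which is the intended effect of cross-view filtering.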