Handling Sparse and Noisy Labels in Deep Graph Learning

Publication Type: Thesis
Issue Date: 2022
Recently, graph neural networks (GNNs) have achieved remarkable success on node classification tasks, but they tend to suffer from label sparsity and label noise. In light of these challenges, this thesis focuses on active learning, pseudo-labeling, and label-noise representation learning in the context of GNNs. In particular, to tackle the label sparsity problem, a semi-supervised adversarial active learning framework is proposed for attributed graphs. It uses an adversarial learning approach to select the most informative nodes for label querying, so that the constructed label set maximizes model performance. In addition, an informative pseudo-labeling method is proposed for learning GNNs with sparse labels. When assigning pseudo labels, this method takes informativeness, reliability, and class imbalance into account, thereby deriving a pseudo-label set that leads to high performance. Finally, to combat the label noise problem, a unified robust training framework is proposed, which performs sample reweighting and label correction simultaneously based on a purpose-built label aggregation method. Extensive experiments demonstrate that the proposed methods consistently outperform state-of-the-art baselines on a variety of real-world datasets.
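
To make the pseudo-labeling idea above concrete, the following is a minimal, generic sketch, not the thesis's informative pseudo-labeling method: it only shows confidence-thresholded selection with a per-class quota as a rough stand-in for the reliability and class-imbalance considerations the abstract mentions. All names (select_pseudo_labels, probs, unlabeled_idx, per_class_quota) are hypothetical, and probs is assumed to be the softmax output of an already-trained GNN over all nodes.

    import torch

    def select_pseudo_labels(probs: torch.Tensor,
                             unlabeled_idx: torch.Tensor,
                             confidence_threshold: float = 0.9,
                             per_class_quota: int = 50):
        """Pick unlabeled nodes whose predicted class probability is high,
        capping the number selected per class to limit class imbalance."""
        # confidence and predicted class for the unlabeled nodes only
        conf, pred = probs[unlabeled_idx].max(dim=1)
        selected_idx, selected_lbl = [], []
        for c in range(probs.size(1)):
            # candidates predicted as class c above the confidence threshold
            mask = (pred == c) & (conf >= confidence_threshold)
            cand = unlabeled_idx[mask]
            # keep at most per_class_quota of the most confident candidates
            top = conf[mask].argsort(descending=True)[:per_class_quota]
            selected_idx.append(cand[top])
            selected_lbl.append(torch.full((len(top),), c, dtype=torch.long))
        return torch.cat(selected_idx), torch.cat(selected_lbl)

The returned node indices and pseudo labels would then be appended to the sparse labeled set for the next round of GNN training; the thesis's method additionally scores nodes by informativeness rather than confidence alone.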