CA-GNN: A Competence-Aware Graph Neural Network for Semi-Supervised Learning on Streaming Data
- Publisher:
- Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Cybernetics, 2024, vol. PP, no. 99, pp. 1-14
- Issue Date:
- 2024-01-01
Filename | Description | Size
---|---|---
1779818.pdf | Published version | 5.39 MB
This item is being processed and is not currently available.
One challenge of learning from streaming data is that only a limited number of labeled examples are available, which makes semi-supervised learning (SSL) algorithms an efficient tool for streaming data mining. Recently, graph-based SSL algorithms have been proposed to improve SSL performance, because the graph structure can exploit interactions between neighboring nodes. However, graph-based SSL algorithms have two main limitations when applied to streaming data. First, not all labels in the streaming data may be reliable, and direct classification using the graph can lead to suboptimal performance. Second, graph-based SSL algorithms assume the structure of the graph is static, whereas the learning environment of streaming data is dynamic. Hence, we propose a competence-aware graph neural network (CA-GNN) to address these two limitations. Unlike other models, CA-GNN does not directly rely on graph information that may include mislabeled nodes. Instead, a competence model is used to explore latent semantic correlations in the streaming data and capture the reliability of each example. A streaming learning strategy then evolves CA-GNN's parameters to capture the dynamism of the graph sequences. We conducted experiments on seven real datasets and four synthetic datasets and compared the outcomes across various methods. The results demonstrate that CA-GNN classifies streaming data more effectively than the state-of-the-art (SOTA) methods.
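The abstract gives no implementation details of CA-GNN itself. As a rough, hedged illustration of the general idea it builds on — graph-based SSL in which unreliable labels are down-weighted by a per-node reliability score before information is diffused over the graph — one might sketch a reliability-weighted label propagation step. All names here (`propagate_labels`, the `reliability` vector, the toy graph) are illustrative assumptions, not the paper's method:

```python
import numpy as np

def normalize_adjacency(adj):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard in GCN-style models.
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def propagate_labels(adj, labels, reliability, num_steps=10):
    # Diffuse one-hot labels over the graph, scaling each labeled node's
    # contribution by a reliability score in [0, 1] (0 = unlabeled/untrusted).
    a_norm = normalize_adjacency(adj)
    y = labels * reliability[:, None]   # down-weight suspect labels
    h = y.copy()
    for _ in range(num_steps):
        h = a_norm @ h
        h[reliability > 0] = y[reliability > 0]  # re-clamp (weighted) labeled nodes
    return h.argmax(axis=1)

# Toy example: two disconnected pairs of nodes, one trusted label per pair.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
labels = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
reliability = np.array([1.0, 0.0, 1.0, 0.0])
pred = propagate_labels(adj, labels, reliability)  # → [0, 0, 1, 1]
```

This static sketch omits the dynamic part entirely: CA-GNN additionally evolves its parameters as the graph sequence streams in, which a fixed propagation loop like this does not capture.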