A novel complex network prediction method based on multi-granularity contrastive learning

Publisher:
Springer Nature
Publication Type:
Journal Article
Citation:
CCF Transactions on Pervasive Computing and Interaction, 2024, 6(4), pp. 1-12
Issue Date:
2024-01-01
Abstract:
The rapid development of IoT, cloud computing, and big data has led to an exponential increase in data complexity, driving the widespread application of complex networks. In transportation networks, for example, accurately predicting vehicle behaviors and traffic flow is critical for optimizing intelligent transportation systems. However, traditional deep learning models often focus on a single spatial granularity, limiting their ability to capture the multi-granularity interactions within these networks and reducing prediction accuracy. Key challenges include managing the intricate spatiotemporal dependencies inherent in complex network prediction and effectively integrating multi-granularity information. To address these challenges, we propose a novel complex network prediction method based on spatial multi-granularity adaptive fusion and contrastive learning. Our approach captures spatial representations at three levels: micro (node-wise graph), meso (regional graph), and macro (global graph). These representations are dynamically fused through an adaptive strategy to enhance spatiotemporal modeling. Furthermore, we introduce a multi-granularity contrastive learning mechanism that explores both commonalities and distinctions across the three levels, boosting the model's robustness and generalization. By aligning and contrasting features at various granularities, our method captures both local and global dynamics effectively. Extensive experiments on three real-world traffic datasets demonstrate that our method consistently outperforms state-of-the-art models in prediction accuracy.
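
To make the abstract's two core ideas concrete, the following is a minimal sketch of how they could look in code: adaptive fusion of micro/meso/macro node embeddings via learned softmax gates, and an InfoNCE-style contrastive loss that aligns each node's representations across granularity views. This is an illustrative reading of the abstract, not the authors' published implementation; all module names, dimensions, the gating design, and the pairwise loss choice are assumptions.

    # Sketch (PyTorch, illustrative only): adaptive multi-granularity fusion
    # plus a cross-granularity InfoNCE loss. Names and shapes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class AdaptiveGranularityFusion(nn.Module):
        """Fuses micro-, meso-, and macro-level node embeddings with
        per-node, data-dependent weights (hypothetical design)."""

        def __init__(self, dim: int):
            super().__init__()
            # Scores one weight per granularity from the concatenated views.
            self.gate = nn.Linear(3 * dim, 3)

        def forward(self, h_micro, h_meso, h_macro):
            # h_*: (num_nodes, dim) embeddings from three graph encoders.
            stacked = torch.stack([h_micro, h_meso, h_macro], dim=1)  # (N, 3, D)
            weights = F.softmax(self.gate(stacked.flatten(1)), dim=-1)  # (N, 3)
            return (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (N, D)


    def multi_granularity_infonce(h_a, h_b, temperature: float = 0.1):
        """InfoNCE between two granularity views: the same node across
        views is the positive pair; all other nodes are negatives."""
        z_a = F.normalize(h_a, dim=-1)
        z_b = F.normalize(h_b, dim=-1)
        logits = z_a @ z_b.t() / temperature  # (N, N) similarity matrix
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return F.cross_entropy(logits, targets)


    if __name__ == "__main__":
        n, d = 32, 64  # toy sizes; real encoders would produce these
        h_micro, h_meso, h_macro = (torch.randn(n, d) for _ in range(3))
        fused = AdaptiveGranularityFusion(d)(h_micro, h_meso, h_macro)
        # Contrast every pair of granularities, then average the losses.
        loss = (multi_granularity_infonce(h_micro, h_meso)
                + multi_granularity_infonce(h_meso, h_macro)
                + multi_granularity_infonce(h_micro, h_macro)) / 3
        print(fused.shape, loss.item())

Softmax gating is one simple way to realize "adaptive fusion"; attention-based weighting per time step would be another. Likewise, averaging pairwise InfoNCE over the three views is one plausible reading of "aligning and contrasting features at various granularities"; the paper itself should be consulted for the exact formulation.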