Weak Supervision Network Embedding for Constrained Graph Learning

Publisher:
Springer
Publication Type:
Conference Proceeding
Citation:
Advances in Knowledge Discovery and Data Mining, 2021, 12712 LNAI, pp. 488-500
Issue Date:
2021-01-01
Constrained learning, a weakly supervised learning task, aims to incorporate domain constraints to learn models without requiring labels for each instance. Because weak supervision knowledge is useful and easy to obtain, constrained learning outperforms unsupervised learning and is preferable to supervised learning in terms of labeling costs. To date, constrained learning, especially constrained clustering, has been extensively studied, but has primarily focused on data in Euclidean space. In this paper, we propose a weak supervision network embedding (WSNE) for constrained learning of graphs. Because no label is available for individual nodes, we propose a new loss function to quantify the constraint-based loss, and integrate this loss into a combined graph convolutional network (GCN) and variational graph auto-encoder (VGAE) framework to jointly model graph structure and node attributes. The joint optimization allows WSNE to learn embeddings that not only preserve network topology and content but also satisfy the constraints. Experiments show that WSNE outperforms baselines on constrained graph learning tasks, including constrained graph clustering and constrained graph classification.
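To illustrate the kind of constraint-based loss the abstract describes, the sketch below shows one common way to penalize embeddings against pairwise must-link and cannot-link constraints: must-link pairs are pulled together via squared distance, and cannot-link pairs are pushed apart via a margin hinge. This is a generic illustration under assumed conventions, not the specific loss defined in the paper; the function name, margin form, and toy data are all hypothetical.

```python
import numpy as np

def constraint_loss(Z, must_link, cannot_link, margin=1.0):
    """Hypothetical pairwise constraint loss on node embeddings Z.

    Z           -- (n_nodes, dim) embedding matrix
    must_link   -- pairs (i, j) that should share a cluster
    cannot_link -- pairs (i, j) that should not share a cluster
    margin      -- desired minimum separation for cannot-link pairs
    """
    loss = 0.0
    # Must-link pairs: penalize squared distance between embeddings.
    for i, j in must_link:
        loss += np.sum((Z[i] - Z[j]) ** 2)
    # Cannot-link pairs: hinge penalty if the pair is closer than the margin.
    for i, j in cannot_link:
        d = np.linalg.norm(Z[i] - Z[j])
        loss += max(0.0, margin - d) ** 2
    return loss

# Toy embeddings for four nodes in a 2-D latent space.
Z = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [1.0, 1.0],
              [1.1, 1.0]])

# Example constraints: nodes 0 and 1 same cluster; nodes 0 and 2 different.
print(constraint_loss(Z, must_link=[(0, 1)], cannot_link=[(0, 2)]))
```

In a joint framework like the one the paper describes, a term of this form would be added to the GCN/VGAE reconstruction objective so that gradient updates shape the embedding space to respect the constraints while still preserving topology and attributes.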