Feature-Dependent Graph Convolutional Autoencoders with Adversarial Training Methods

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2019 International Joint Conference on Neural Networks (IJCNN 2019), July 2019, pp. 1-8
Issue Date:
2019
Filename: 08852314.pdf (Published version, Adobe PDF, 752.03 kB)
Abstract:
© 2019 IEEE. Graphs are ubiquitous for describing and modeling complex data structures, and graph embedding is an effective solution for learning a mapping from a graph to a low-dimensional vector space while preserving relevant graph characteristics. Most existing graph embedding approaches either embed the topological information and node features separately or learn one regularized embedding from both sources of information; however, they mostly overlook the interdependency between structural characteristics and node features when feeding graph data into the models. Moreover, existing methods reconstruct only the structural characteristics, and thus cannot fully leverage the interaction between the topology and the node features during the encoding-decoding procedure. To address these problems, we propose an autoencoder framework for graph embedding (GED) and its variational version (VEGD). The contribution of our work is two-fold: 1) the proposed frameworks exploit a feature-dependent graph matrix (FGM) to naturally merge the structural characteristics and node features according to their interdependency; and 2) the Graph Convolutional Network (GCN) decoder of the proposed framework reconstructs both structural characteristics and node features, which naturally captures the interaction between these two sources of information while learning the embedding. We evaluated our framework on three real-world graph datasets: Cora, Citeseer, and PubMed. The results show that our methods outperform baseline methods on both link prediction and graph clustering tasks.
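The abstract describes the architecture but not its exact equations, so the following is only a minimal PyTorch sketch of the general idea: a hypothetical feature-dependent graph matrix that blends the adjacency matrix with a cosine feature-similarity matrix, a GCN encoder, and a decoder that reconstructs both structure and features. The paper's actual FGM construction and its adversarial training component are not specified here, and all names (feature_dependent_graph, GraphAE, alpha) are illustrative, not the authors' API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_dependent_graph(adj, feats, alpha=0.5):
    # Hypothetical FGM: blend the adjacency matrix with a non-negative
    # cosine feature-similarity matrix (the paper's construction may differ).
    sim = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)
    fgm = alpha * adj + (1.0 - alpha) * sim.clamp(min=0.0)
    # Add self-loops and apply symmetric normalization D^{-1/2} (FGM + I) D^{-1/2}.
    fgm = fgm + torch.eye(fgm.size(0))
    d_inv_sqrt = fgm.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * fgm * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    # One graph convolution: propagate with the normalized matrix, then project.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        return a_hat @ self.lin(x)

class GraphAE(nn.Module):
    # GCN encoder plus a dual decoder: inner-product structure
    # reconstruction and a GCN feature decoder.
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, emb_dim)
        self.dec = GCNLayer(emb_dim, in_dim)

    def forward(self, a_hat, x):
        z = self.enc2(a_hat, F.relu(self.enc1(a_hat, x)))
        adj_rec = torch.sigmoid(z @ z.t())   # structure reconstruction
        feat_rec = self.dec(a_hat, z)        # feature reconstruction
        return z, adj_rec, feat_rec

# Toy usage on a hypothetical 4-node graph.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
x = torch.randn(4, 8)
a_hat = feature_dependent_graph(adj, x)
model = GraphAE(in_dim=8, hid_dim=16, emb_dim=4)
z, adj_rec, feat_rec = model(a_hat, x)
loss = F.binary_cross_entropy(adj_rec, adj) + F.mse_loss(feat_rec, x)
```

A training loop would minimize such a combined loss (here, binary cross-entropy on the reconstructed adjacency plus mean-squared error on the reconstructed features); the variational version (VEGD) would presumably replace the deterministic embedding with a Gaussian posterior over z and add a KL-divergence term, as in standard variational graph autoencoders.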