Low-bit quantization for attributed network representation learning

Publication Type:
Conference Proceeding
Citation:
IJCAI International Joint Conference on Artificial Intelligence, August 2019, pp. 4047-4053
Issue Date:
2019-01-01
© 2019 International Joint Conferences on Artificial Intelligence. All rights reserved.

Attributed network embedding plays an important role in transforming network data into compact vectors for effective network analysis. Existing attributed network embedding models operate either in continuous Euclidean spaces, which introduce data redundancy, or in binary coding spaces, which incur a significant loss of representation accuracy. To address this, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low-bit-width values while preserving high representation accuracy. Specifically, we formulate a new representation learning function based on matrix factorization that jointly learns the low-bit node representations and the layer aggregation weights under a low-bit quantization constraint. Because the new learning function falls into the category of mixed-integer optimization, we propose an efficient mixed-integer-based alternating direction method of multipliers (ADMM) algorithm as the solution. Experiments on real-world node classification and link prediction tasks validate the promising performance of the proposed LQANR model.
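To make the abstract's approach concrete, here is a minimal sketch of the kind of alternation a mixed-integer ADMM solver for a quantization-constrained factorization might perform. It is an illustration under assumptions, not the paper's actual algorithm: the symmetric objective ||S - B B^T||_F^2, the signed-integer low-bit value set, and the function names `low_bit_project` and `admm_factorize` are all hypothetical stand-ins for the formulation described above.

```python
# Hypothetical sketch of a quantization-constrained ADMM loop; not the
# authors' LQANR implementation. Assumes a symmetric similarity matrix S
# and a low-bit value set of signed integers representable in `bit_width` bits.
import numpy as np

def low_bit_project(H, bit_width=2):
    """Project each entry of H onto the nearest integer in
    [-(2**(bit_width-1)), 2**(bit_width-1) - 1] (the assumed low-bit set)."""
    lo = -(2 ** (bit_width - 1))
    hi = 2 ** (bit_width - 1) - 1
    return np.clip(np.rint(H), lo, hi)

def admm_factorize(S, dim=8, bit_width=2, rho=1.0, step=0.01, iters=50, seed=0):
    """Toy ADMM-style alternation for min ||S - B B^T||_F^2 subject to B
    having low-bit entries: a continuous copy H, a quantized copy B, and a
    scaled dual variable U are updated in turn."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    H = rng.standard_normal((n, dim))
    B = low_bit_project(H, bit_width)
    U = np.zeros_like(H)
    for _ in range(iters):
        # H-update: gradient step on the factorization loss plus the
        # augmented-Lagrangian penalty tying H to the quantized copy B.
        grad = -4.0 * (S - H @ H.T) @ H + rho * (H - B + U)
        H -= step * grad
        # B-update: projection onto the low-bit constraint set.
        B = low_bit_project(H + U, bit_width)
        # Dual update: accumulate the residual between H and B.
        U += H - B
    return B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((20, 20))
    S = (A + A.T) / 2.0  # symmetric stand-in for an aggregated proximity matrix
    print(admm_factorize(S, dim=8, bit_width=2)[:2])
```

The split into a continuous variable, a quantized variable, and a dual variable is the standard ADMM device for mixed-integer constraints: the hard projection is isolated in one cheap per-entry step, while the factorization loss is handled with ordinary continuous updates.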