Scalable completion of nonnegative matrices with the separable structure

Publication Type:
Conference Proceeding
Citation:
30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, pp. 2279 - 2285
Issue Date:
2016-01-01
Files in This Item:
Filename: scalabla.pdf (Published version, Adobe PDF, 681.42 kB)
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Matrix completion aims to recover the missing/unobserved values of a data matrix from very limited observations. Owing to its wide range of potential applications, it has received growing interest in fields from machine learning and data mining to collaborative filtering and computer vision. To ensure successful recovery of the missing values, most existing matrix completion algorithms rely on the low-rank assumption: the fully observed data matrix has low rank, or equivalently, its columns can be linearly represented by a small number of basis vectors. Although this low-rank assumption generally applies in practice, real-world data can possess much richer structural information. In this paper, we present a new model for matrix completion, motivated by the separability assumption on nonnegative matrices from the recent literature on matrix factorisations: there exists a set of columns of the matrix such that the remaining columns can be represented by their convex combinations. Given the separability property, which holds reasonably well in many applications, our model provides more accurate matrix completion than low-rank based algorithms. Further, we derive a scalable algorithm to solve our matrix completion model, which uses a randomised method to select the basis columns under the separability assumption and a coordinate-gradient based method to automatically handle the structural constraints during optimisation. Compared to state-of-the-art algorithms, the proposed matrix completion model achieves competitive results on both synthetic and real datasets.
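To make the separability idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it assumes a set of "anchor" (basis) columns is already known and fully observed, and completes a partially observed column by fitting convex-combination weights to its observed entries via projected gradient descent on the simplex. The helper names (`project_simplex`, `complete_column`) and all parameter values are hypothetical choices for this example.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {h : h >= 0, sum(h) = 1}, so the fit stays a convex combination."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def complete_column(W, x, mask, steps=3000, lr=0.05):
    """Fit convex weights h so that W @ h matches x on the observed
    entries (mask == True), then return W @ h as the completed column."""
    k = W.shape[1]
    h = np.full(k, 1.0 / k)          # start at the simplex centre
    Wo, xo = W[mask], x[mask]        # restrict to observed entries
    for _ in range(steps):
        grad = Wo.T @ (Wo @ h - xo)  # gradient of 0.5*||Wo h - xo||^2
        h = project_simplex(h - lr * grad)
    return W @ h

# Toy example: 3 fully observed anchor columns, one column with
# two missing entries that truly is a convex combination of them.
rng = np.random.default_rng(0)
W = rng.random((8, 3))               # nonnegative anchor columns
h_true = np.array([0.5, 0.3, 0.2])   # convex weights (sum to 1)
x_full = W @ h_true
mask = np.ones(8, dtype=bool)
mask[[2, 5]] = False                 # entries 2 and 5 are unobserved
x_hat = complete_column(W, x_full, mask)
print(np.max(np.abs(x_hat - x_full)))  # reconstruction error
```

Because the hidden column lies exactly in the convex hull of the anchors and the observed rows have full column rank, the projected gradient iterates recover the true weights, so the missing entries are filled in accurately; the paper's contribution is doing this at scale, with the anchors themselves chosen by a randomised method.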