A multiview learning framework with a linear computational cost

Publication Type:
Journal Article
Citation:
IEEE Transactions on Cybernetics, 2018, 48 (8), pp. 2416 - 2425
Issue Date:
2018-08-01
© 2013 IEEE. Learning features from multiple views has attracted much research attention in different machine learning tasks, such as multiclass and multilabel classification problems. In this paper, we propose a multiclass multilabel multiview learning framework with a linear computational cost, where an example is associated with at least one label and represented by multiple information sources. We simultaneously analyze all features by learning an integrated projection matrix, and can automatically select the more important views for the subsequent classifier to predict each class. As the proposed objective function is nonsmooth and difficult to solve, we apply a novel optimization method that converts the multiview learning problem into a set of linear single-view learning problems by bridging our problem to an easily solvable approach. In contrast to conventional methods, which learn the entire projection matrix at once, our algorithm independently optimizes each column of the projection matrix for each class, so the optimization can be easily parallelized. In each column optimization, the most computationally intensive step is a simple matrix-by-vector multiplication. As a result, our algorithm is far more applicable to large-scale problems than multiview learning methods with a nonlinear computational cost. Moreover, a rigorous convergence proof of the proposed algorithm is provided. To evaluate the effectiveness of the proposed approach, experimental comparisons are made with state-of-the-art algorithms on multiclass and multilabel classification tasks over many multiview benchmarks. We also report efficiency comparisons across different numbers of data samples. The experimental results demonstrate that our algorithm achieves superior performance to all the compared algorithms.
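The column-wise scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual objective or algorithm; it is a hypothetical least-squares surrogate showing the two properties the abstract emphasizes: each class column is optimized independently (hence parallelizable), and the dominant per-iteration cost is a matrix-by-vector multiplication. The function names and the gradient-descent solver are assumptions for illustration only.

```python
import numpy as np

def fit_column(X, y_col, n_iters=200, lr=None):
    """Optimize one projection column on a least-squares surrogate.

    Each iteration costs only two matrix-by-vector products
    (X @ w and X.T @ r), i.e., linear in the number of nonzeros of X.
    """
    n, d = X.shape
    if lr is None:
        # Step size 1/L with L = ||X||_2^2, the Lipschitz constant of the gradient.
        lr = 1.0 / (np.linalg.norm(X, ord=2) ** 2 + 1e-12)
    w = np.zeros(d)
    for _ in range(n_iters):
        r = X @ w - y_col        # residual: one matrix-vector product
        w -= lr * (X.T @ r)      # gradient step: another matrix-vector product
    return w

def fit_projection(X, Y):
    """Fit the full projection matrix one class column at a time.

    The columns are independent, so this loop is trivially parallelizable
    across classes, as the abstract notes.
    """
    return np.stack([fit_column(X, Y[:, c]) for c in range(Y.shape[1])], axis=1)
```

Under this sketch, stacking all views' features into `X` beforehand would mirror the "integrated projection matrix" idea at a high level, although the paper's actual formulation (with view selection and a nonsmooth objective) is more involved.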