Grassmannian regularized structured multi-view embedding for image classification

Publication Type:
Journal Article
Citation:
IEEE Transactions on Image Processing, 2013, 22 (7), pp. 2646 - 2660
Issue Date:
2013-05-23
Images are usually represented by features from multiple views, e.g., color and texture. In image classification, the goal is to fuse all of these multi-view features in a reasonable manner and achieve satisfactory classification performance. However, the features are often different in nature, and fusing them is nontrivial. In particular, some extracted features are redundant or noisy and consequently are not discriminative for classification. To alleviate these problems in an image classification context, we propose in this paper a novel multi-view embedding framework, termed Grassmannian regularized structured multi-view embedding, or GrassReg for short. GrassReg maps the graph Laplacian obtained from each view to a point on the Grassmann manifold and penalizes the disagreement between different views according to the Grassmannian distance. Therefore, a view that is consistent with the others is given more weight than a view that disagrees with them when learning a unified subspace for multi-view data representation. In addition, we impose a group sparsity penalty on the obtained low-dimensional embeddings so that they can better capture the group structure of the intrinsic data distribution. Empirically, we compare GrassReg with representative multi-view algorithms and demonstrate its effectiveness on a number of multi-view image data sets. © 1992-2012 IEEE.
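The abstract's central idea, mapping each view's graph Laplacian to a point on the Grassmann manifold and measuring inter-view disagreement by a Grassmannian distance, can be illustrated with a minimal sketch. The code below is not the paper's exact formulation: the k-NN heat-kernel graph, the choice of the d smoothest eigenvectors as the subspace basis, the projection (chordal) distance, and parameters such as k, sigma, and d are all assumptions made for this example.

```python
# Illustrative sketch only; not the authors' implementation of GrassReg.
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian(X, k=10, sigma=1.0):
    """Symmetric normalized Laplacian of a k-NN heat-kernel graph (assumed construction)."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / (2 * sigma**2))
    # zero out everything beyond the k nearest neighbours of each sample, then symmetrize
    far = np.argsort(D2, axis=1)[:, k + 1:]
    for i, js in enumerate(far):
        W[i, js] = 0.0
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    return np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

def grassmann_point(L, d=5):
    """Subspace spanned by the d smoothest Laplacian eigenvectors: a point on Gr(d, n)."""
    _, U = np.linalg.eigh(L)          # eigenvalues in ascending order
    return U[:, :d]                   # n x d orthonormal basis

def projection_distance(U, V):
    """Chordal (projection) distance between subspaces with orthonormal bases U and V."""
    d = U.shape[1]
    s = np.linalg.svd(U.T @ V, compute_uv=False)   # cosines of the principal angles
    return np.sqrt(max(d - np.sum(s**2), 0.0))

# Toy multi-view data: two feature views of the same 100 images (random for illustration).
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 32)), rng.normal(size=(100, 48))]
subspaces = [grassmann_point(graph_laplacian(X)) for X in views]
disagreement = projection_distance(subspaces[0], subspaces[1])
# A view with small distance to the others would be weighted more heavily
# when learning the unified low-dimensional embedding.
print(f"Grassmannian disagreement between views: {disagreement:.3f}")
```

In this sketch, a small pairwise distance indicates that two views induce similar intrinsic structure, which is the property the abstract uses to give consistent views more influence on the unified subspace.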