Multiview spectral embedding

Publication Type:
Journal Article
Citation:
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2010, 40 (6), pp. 1438 - 1446
Issue Date:
2010-12-01
In computer vision and multimedia search, it is common to use multiple features from different views to represent an object. For example, to characterize a natural scene image well, it is essential to find a set of visual features that represent its color, texture, and shape information and to encode each feature into a vector. We therefore have a set of vectors in different spaces representing the same image. Conventional spectral-embedding algorithms cannot deal with such data directly, so these vectors must be concatenated into a single new vector. This concatenation is not physically meaningful, because each feature has its own statistical properties. We therefore develop a new spectral-embedding algorithm, multiview spectral embedding (MSE), which encodes different features in different ways to achieve a physically meaningful embedding. In particular, MSE finds a low-dimensional embedding in which the distribution of each view is sufficiently smooth, and it exploits the complementary properties of the different views. Because MSE has no closed-form solution, we derive an alternating-optimization-based iterative algorithm to obtain the low-dimensional embedding. Empirical evaluations on image retrieval, video annotation, and document clustering demonstrate the effectiveness of the proposed approach. © 2010 IEEE.
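The abstract's alternating scheme can be illustrated with a minimal sketch: with the view weights fixed, the embedding is the bottom eigenvectors of a weighted sum of per-view graph Laplacians; with the embedding fixed, the weights are updated in closed form. This is a hedged illustration, not the paper's exact formulation; the helper `knn_laplacian`, the weight-update rule, and all parameter names are assumptions made here for clarity.

```python
import numpy as np

def knn_laplacian(X, k=5):
    # Hypothetical helper: unnormalized graph Laplacian from a kNN affinity graph.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]                 # nearest neighbors, skipping self
        W[i, idx] = np.exp(-d2[i, idx] / (d2.mean() + 1e-12))
    W = (W + W.T) / 2                                    # symmetrize the affinity matrix
    return np.diag(W.sum(1)) - W

def mse_embed(views, dim=2, r=2.0, iters=20):
    # Alternating optimization (sketch): fix weights -> eigen-solve for the shared
    # embedding Y; fix Y -> reweight views so smoother views get larger weight.
    m = len(views)
    Ls = [knn_laplacian(X) for X in views]
    alpha = np.full(m, 1.0 / m)                          # start with uniform view weights
    for _ in range(iters):
        L = sum(a ** r * Lv for a, Lv in zip(alpha, Ls))
        vals, vecs = np.linalg.eigh(L)
        Y = vecs[:, :dim]                                # bottom eigenvectors span the embedding
        tr = np.array([np.trace(Y.T @ Lv @ Y) for Lv in Ls])
        w = (1.0 / np.maximum(tr, 1e-12)) ** (1.0 / (r - 1.0))
        alpha = w / w.sum()                              # normalized closed-form weight update
    return Y, alpha
```

The exponent `r > 1` controls how sharply the weights concentrate on the smoothest view; `r = 2` keeps the update simple in this sketch.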