Supervised Gaussian process latent variable model for dimensionality reduction

Publication Type: Journal Article
Citation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011, 41 (2), pp. 425-434
Issue Date: 2011-04-01
The Gaussian process latent variable model (GP-LVM) is an effective probabilistic approach to dimensionality reduction because it can recover a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores the class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM uses the class label information effectively and therefore consistently outperforms both the GP-LVM and its discriminative extension. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
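For orientation, the following is a minimal sketch of the objective behind the MAP estimation mentioned above; the notation and the form of the supervision term are assumptions made here, not definitions taken from the paper. With observations $Y \in \mathbb{R}^{N \times D}$, latent positions $X \in \mathbb{R}^{N \times q}$, and a kernel matrix $K_X$ computed on $X$, the standard GP-LVM log marginal likelihood is

$$\log p(Y \mid X) = -\frac{D}{2}\log\lvert K_X \rvert \;-\; \frac{1}{2}\operatorname{tr}\!\left(K_X^{-1} Y Y^{\top}\right) \;-\; \frac{ND}{2}\log 2\pi .$$

The MAP estimate of the latent positions maximizes $\log p(Y \mid X) + \log p(X)$ over $X$; a supervised variant additionally couples the class labels $c$ to the latent space, for example through an extra likelihood term $\log p(c \mid X)$, so that the learned low-dimensional manifold also reflects class membership.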