Large-margin weakly supervised dimensionality reduction

Publication Type:
Conference Proceeding
Citation:
31st International Conference on Machine Learning, ICML 2014, 2014, vol. 3, pp. 2472-2482
Issue Date:
2014-01-01
Copyright © 2014 by the International Machine Learning Society (IMLS). All rights reserved.
Abstract:
This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle, angle and distance: it simultaneously encourages angle consistency between preference pairs and maximizes the distance between the examples within each preference pair. Two specific algorithms are developed: an alternating direction method that learns a linear transformation matrix, and a gradient boosting technique that optimizes a non-linear transformation directly in function space. Theoretical analysis shows that the proposed large-margin optimization criteria improve the robustness and generalization performance of preference learning algorithms on the resulting low-dimensional subspace. Experimental results on real-world datasets demonstrate the importance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.
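
The abstract names the two large-margin criteria (angle and distance) but does not give the objective. The sketch below is a hypothetical toy illustration, not the paper's actual formulation: it learns a linear projection W by combining a hinge-style distance term, which pushes the projected examples of each preference pair at least a fixed margin apart, with an angle-consistency term, which penalizes disagreement between the directions of projected preference pairs. The function name preference_margin_loss, the weight lam, and the use of L-BFGS with numerical gradients are all assumptions made for illustration.

import numpy as np
from scipy.optimize import minimize

def preference_margin_loss(w, A, B, d_low, margin=1.0, lam=0.1):
    # Rows of A are preferred to the corresponding rows of B (weak cues).
    W = w.reshape(d_low, A.shape[1])
    # Difference vectors of each preference pair in the low-dimensional space.
    D = (A - B) @ W.T                      # shape (n_pairs, d_low)
    norms = np.linalg.norm(D, axis=1) + 1e-12
    # Distance term: hinge loss pushing projected pair distances past the margin.
    dist_loss = np.maximum(0.0, margin - norms).mean()
    # Angle term: penalize low cosine similarity between pair directions.
    U = D / norms[:, None]                 # unit direction of each pair
    angle_loss = (1.0 - U @ U.T).mean()
    return dist_loss + lam * angle_loss

# Toy usage on synthetic data (all sizes are arbitrary assumptions).
rng = np.random.default_rng(0)
n_pairs, d_high, d_low = 50, 20, 3
A = rng.normal(size=(n_pairs, d_high))     # preferred examples
B = rng.normal(size=(n_pairs, d_high))     # non-preferred counterparts
w0 = rng.normal(size=d_low * d_high)
res = minimize(preference_margin_loss, w0, args=(A, B, d_low),
               method='L-BFGS-B')          # numerical gradients by default
W = res.x.reshape(d_low, d_high)           # learned linear transformation

The paper's alternating direction method and function-space gradient boosting would replace this generic L-BFGS step; the sketch only shows how the two margin criteria can coexist in a single objective.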