Learning a Distance Metric by Empirical Loss Minimization

Publisher:
AAAI Press/International Joint Conferences on Artificial Intelligence
Publication Type:
Conference Proceeding
Citation:
Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2011, pp. 1186 - 1191
Issue Date:
2011-01
In this paper, we study the problem of learning a distance metric and propose a loss-function-based metric learning framework, in which the metric is estimated by minimizing an empirical risk over a training set. Under mild conditions on the instance distribution and the loss function used, we prove that the empirical risk converges to its expected counterpart at a root-n rate. In addition, assuming that the best metric minimizing the expected risk is bounded, we prove that the learned metric is consistent. We present two example algorithms within the proposed framework, one using a log loss and the other a smoothed hinge loss. Experimental results show the effectiveness of the proposed algorithms.
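The abstract does not reproduce the paper's exact risk functional, but the general recipe it describes — estimate a Mahalanobis metric by minimizing an empirical risk over labeled pairs with a smoothed hinge loss, keeping the metric positive semidefinite — can be sketched as follows. All names, the margin formulation, and the specific Huber-style smoothing below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def smoothed_hinge(z):
    # Huber-style smoothing of the hinge loss max(0, 1 - z):
    # quadratic near the kink, linear for z <= 0, zero for z >= 1.
    return np.where(z >= 1, 0.0, np.where(z <= 0, 0.5 - z, 0.5 * (1 - z) ** 2))

def smoothed_hinge_grad(z):
    # Derivative of the smoothed hinge with respect to z.
    return np.where(z >= 1, 0.0, np.where(z <= 0, -1.0, z - 1.0))

def learn_metric(X, pairs, labels, margin=1.0, lr=0.05, epochs=200):
    """Minimize the empirical risk (1/n) * sum_k loss(y_k * (margin - d_M))
    over labeled pairs, where d_M(x, x') = (x - x')^T M (x - x'),
    y = +1 for similar pairs and y = -1 for dissimilar pairs.
    M is projected onto the PSD cone after each gradient step."""
    d = X.shape[1]
    M = np.eye(d)
    n = len(pairs)
    for _ in range(epochs):
        grad = np.zeros((d, d))
        for (i, j), y in zip(pairs, labels):
            diff = X[i] - X[j]
            dist = diff @ M @ diff
            z = y * (margin - dist)
            # Chain rule: dL/dM = L'(z) * dz/dM, with dz/dM = -y * diff diff^T.
            grad += smoothed_hinge_grad(z) * (-y) * np.outer(diff, diff)
        M -= lr * grad / n
        # Project onto the PSD cone by clipping negative eigenvalues.
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M
```

Swapping `smoothed_hinge` for a log loss, `log(1 + exp(-z))`, yields the other example algorithm the abstract mentions; only the loss and its derivative change, the risk-minimization loop stays the same.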