Constrained empirical risk minimization framework for distance metric learning
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Neural Networks and Learning Systems, 2012, 23 (8), pp. 1194 - 1205
- Issue Date:
- 2012-12-01
Closed Access
Filename | Description | Size
---|---|---
2012001394OK.pdf | | 772.55 kB
This item is closed access and not available.
Distance metric learning (DML) has received increasing attention in recent years. In this paper, we propose a constrained empirical risk minimization framework for DML. This framework enriches the state of the art in both theoretical and algorithmic respects. Theoretically, we comprehensively analyze the generalization error by bounding the sample error and the approximation error with respect to the best model. Algorithmically, we carefully derive an optimal gradient descent method using Nesterov's technique, and we provide two example algorithms that use the logarithmic loss and the smoothed hinge loss, respectively. We evaluate the new framework in data classification and image retrieval experiments. Results show that it is competitive with representative DML algorithms, including Xing's method, the large margin nearest neighbor classifier, neighborhood component analysis, and regularized metric learning. © 2012 IEEE.
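The abstract describes accelerated gradient descent over a Mahalanobis metric with a smoothed hinge loss on pairwise constraints. The sketch below illustrates that general pattern only; the smoothing used, the step size `L`, and the constraint encoding are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def smoothed_hinge(z, gamma=1.0):
    """Huber-style smoothed hinge loss and its derivative.
    This particular smoothing is an assumption, not the paper's."""
    if z >= 1:
        return 0.0, 0.0
    if z <= 1 - gamma:
        return 1 - z - gamma / 2, -1.0
    return (1 - z) ** 2 / (2 * gamma), -(1 - z) / gamma

def nesterov_dml(pairs, labels, d, steps=200, L=10.0):
    """Nesterov accelerated gradient on a PSD Mahalanobis matrix M,
    minimizing smoothed hinge loss on pairwise distance constraints.
    labels: +1 for similar pairs (small distance), -1 for dissimilar."""
    M = np.eye(d)           # current iterate
    Y = M.copy()            # lookahead point
    t = 1.0                 # momentum coefficient
    for _ in range(steps):
        # Gradient of the empirical risk at the lookahead point Y.
        G = np.zeros((d, d))
        for (x1, x2), y in zip(pairs, labels):
            diff = (x1 - x2).reshape(-1, 1)
            dist = float(diff.T @ Y @ diff)
            # Margin variable z = y * (1 - dist); dz/dM = -y * diff diff^T.
            _, dl = smoothed_hinge(y * (1.0 - dist))
            G += dl * (-y) * (diff @ diff.T)
        G /= len(pairs)
        # Gradient step, then projection onto the PSD cone (the
        # constraint of the framework) via eigenvalue clipping.
        M_new = Y - G / L
        w, V = np.linalg.eigh((M_new + M_new.T) / 2)
        M_new = (V * np.clip(w, 0, None)) @ V.T
        # Nesterov momentum update.
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = M_new + ((t - 1) / t_new) * (M_new - M)
        M, t = M_new, t_new
    return M
```

The projection step keeps each iterate a valid (positive semidefinite) metric; swapping `smoothed_hinge` for a logarithmic loss yields the second example algorithm mentioned in the abstract.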