Empirical risk minimization for metric learning using privileged information

Publisher:
AAAI
Publication Type:
Conference Proceeding
Citation:
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016, pp. 2266 - 2272
Issue Date:
2016-01-01
Abstract:
Traditional metric learning methods usually make decisions based on a fixed threshold, which may yield a suboptimal metric when the inter-class and intra-class variations are complex. To address this issue, we propose an effective metric learning method that exploits privileged information to relax the fixed threshold under the empirical risk minimization framework. Privileged information is useful high-level semantic information that is available only during training. Our goal is to improve performance by incorporating privileged information into a locally adaptive decision function. We jointly learn two distance metrics by minimizing an empirical loss that penalizes the difference between the distance in the original space and that in the privileged space. The distance in the privileged space acts as a locally adaptive decision threshold, which can guide decision making like a teacher. We optimize the objective function with the Accelerated Proximal Gradient approach to obtain a globally optimal solution. Experimental results show that, by leveraging privileged information, the proposed method achieves satisfactory performance.
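The core idea of the abstract can be sketched in a few lines of code. The sketch below is illustrative only, not the paper's exact formulation: it assumes a margin-based pairwise loss in which the squared Mahalanobis distance under a privileged-space metric `P` replaces the usual fixed threshold for the original-space metric `M`, and it shows the eigenvalue-clipping projection commonly used as the proximal step for the positive-semidefinite constraint in proximal-gradient metric learning. The function names and the unit margin are assumptions.

```python
import numpy as np

def pair_dist(M, a, b):
    """Squared Mahalanobis distance (a - b)^T M (a - b)."""
    d = a - b
    return float(d @ M @ d)

def pair_loss(M, P, x_i, x_j, z_i, z_j, y):
    """Hinge-style pairwise loss where the privileged distance acts as a
    locally adaptive threshold (illustrative form, not the paper's exact loss).

    x_i, x_j: features in the original space (available at train and test time)
    z_i, z_j: privileged features (available only during training)
    y: +1 for a same-class pair, -1 for a different-class pair
    """
    d_orig = pair_dist(M, x_i, x_j)
    d_priv = pair_dist(P, z_i, z_j)  # plays the role of the fixed threshold
    return max(0.0, 1.0 - y * (d_priv - d_orig))

def project_psd(M):
    """Proximal step for the PSD constraint: clip negative eigenvalues to zero,
    keeping the learned matrix a valid metric after each gradient step."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T
```

With identity metrics, a same-class pair whose privileged distance comfortably exceeds its original distance incurs zero loss, while a different-class pair in the same configuration is penalized; an accelerated proximal-gradient loop would alternate gradient steps on the summed pairwise losses with `project_psd` on both metrics.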