Social image annotation via cross-domain subspace learning

Publication Type:
Journal Article
Citation:
Multimedia Tools and Applications, 2012, 56 (1), pp. 91 - 108
Issue Date:
2012-01-01
Abstract:
In recent years, cross-domain learning algorithms have attracted much attention as a way to address the problem of insufficient labeled data. However, existing cross-domain learning algorithms cannot be applied to subspace learning, which plays a key role in multimedia processing. This paper envisions cross-domain discriminative subspace learning and provides an effective solution: the cross-domain discriminative locally linear embedding, or CDLLE for short. CDLLE connects the training and testing samples by minimizing the quadratic distance between the distribution of the training samples and that of the testing samples, so that a common subspace for data representation can be found. The expectation is that the discriminative information which separates the concepts in the training set can also separate the concepts in the testing set, thereby addressing the cross-domain problem. Margin maximization is adopted in CDLLE so that the discriminative information for separating different classes is well preserved. Finally, CDLLE encodes the local geometry of each training sample through a series of linear coefficients that reconstruct the sample from its intra-class neighbours, and thus locally preserves the intra-class geometry. Experimental evidence on NUS-WIDE, a popular social image database collected from Flickr, and MSRA-MM, a popular real-world web image annotation database collected from the Internet using Microsoft Live Search, demonstrates the effectiveness of CDLLE for real-world cross-domain applications. © 2010 Springer Science+Business Media, LLC.
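Two ingredients mentioned in the abstract can be illustrated concretely: the LLE-style local reconstruction coefficients (a sample expressed as a weighted combination of its intra-class neighbours) and a quadratic distance between the training and testing distributions. The sketch below, in NumPy, uses the standard sum-to-one constrained least-squares formulation from locally linear embedding and the squared distance between empirical domain means (a linear-kernel MMD); the function names, the regularization scheme, and the choice of a linear kernel are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Coefficients w reconstructing x from its intra-class neighbours.

    Minimizes ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1, the
    standard LLE local fit.  `neighbors` is a (k, d) array of the k
    intra-class neighbour samples of x.  `reg` is an assumed ridge term
    added for numerical stability when the local Gram matrix is singular.
    """
    Z = neighbors - x                          # shift neighbours so x is the origin
    C = Z @ Z.T                                # local Gram matrix, shape (k, k)
    C = C + reg * np.trace(C) * np.eye(len(C)) # regularize (conditioning only)
    w = np.linalg.solve(C, np.ones(len(C)))    # solve C w = 1
    return w / w.sum()                         # rescale to satisfy sum_j w_j = 1

def quadratic_domain_distance(X_train, X_test):
    """Squared distance between empirical means of the two domains.

    A linear-kernel instance of the quadratic train/test distribution
    distance; rows of each array are samples.
    """
    return float(np.sum((X_train.mean(axis=0) - X_test.mean(axis=0)) ** 2))
```

For example, if a sample lies in the affine hull of its neighbours, the recovered weights reconstruct it almost exactly, while `quadratic_domain_distance` is zero for identical domains and grows with the mean shift between them.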