Discovering Low-Rank Shared Concept Space for Adapting Text Mining Models

Publisher:
IEEE
Publication Type:
Journal Article
Citation:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6), pp. 1284–1297
Issue Date:
2013-01
Abstract:
We propose a framework for adapting text mining models that discovers a low-rank shared concept space. The major characteristic of this concept space is that it explicitly minimizes the distribution gap between the source domain, which has sufficient labeled data, and the target domain, which has only unlabeled data, while at the same time minimizing the empirical loss on the labeled data in the source domain. Our method can conduct the domain adaptation task both in the original feature space and in the transformed Reproducing Kernel Hilbert Space (RKHS) via the kernel trick. Theoretical analysis guarantees that the error of our adaptation model can be bounded with respect to the embedded distribution gap and the empirical loss in the source domain. We have conducted extensive experiments on two common text mining problems, namely, document classification and information extraction, to demonstrate the efficacy of our proposed framework.
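The abstract does not spell out the optimization, but the general idea can be illustrated with a minimal NumPy sketch. The sketch below is an illustration only, not the paper's algorithm: it assumes a linear projection W as the low-rank shared concept space, a least-squares loss as the empirical source loss, and the squared distance between the projected source and target means as a simple stand-in for the distribution gap; the paper's exact objective, solver, and kernelized RKHS variant are not reproduced here. The function name adapt_low_rank, the trade-off parameter lam, and the toy data are all hypothetical.

import numpy as np


def adapt_low_rank(Xs, ys, Xt, k=5, lam=1.0, lr=0.01, epochs=300, seed=0):
    # Learn a rank-k projection W (d x k) and a linear predictor v on the
    # projected source data, while penalizing the source/target mean gap
    # in the shared concept space. Plain gradient descent on a
    # least-squares source loss; for illustration only.
    rng = np.random.default_rng(seed)
    d = Xs.shape[1]
    W = rng.normal(scale=0.01, size=(d, k))
    v = np.zeros(k)
    n = len(ys)
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)          # mean difference in the input space, shape (d,)
    for _ in range(epochs):
        Zs = Xs @ W                                   # source data in the shared concept space
        resid = Zs @ v - ys                           # source prediction error
        grad_v = Zs.T @ resid / n                     # gradient of the source loss w.r.t. v
        grad_W_loss = Xs.T @ np.outer(resid, v) / n   # gradient of the source loss w.r.t. W
        gap = diff @ W                                # mean difference mapped into the concept space, shape (k,)
        grad_W_gap = 2.0 * np.outer(diff, gap)        # gradient of the squared mean-gap penalty w.r.t. W
        W -= lr * (grad_W_loss + lam * grad_W_gap)
        v -= lr * grad_v
    return W, v


if __name__ == "__main__":
    # Toy usage: source and target features drawn from shifted Gaussians.
    rng = np.random.default_rng(1)
    Xs = rng.normal(size=(200, 50))
    ys = (Xs[:, 0] > 0).astype(float)
    Xt = rng.normal(loc=0.3, size=(150, 50))
    W, v = adapt_low_rank(Xs, ys, Xt)
    target_scores = (Xt @ W) @ v                      # predictions for the unlabeled target data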