Transfer active learning

Publication Type: Conference Proceeding
Citation: International Conference on Information and Knowledge Management, Proceedings, 2011, pp. 2169-2172
Issue Date: 2011-12-13
Active learning traditionally assumes that labeled and unlabeled samples are drawn from the same distribution, and the goal of an active learner is to label the most informative unlabeled samples. In reality, situations arise in which unlabeled samples from the same domain as the labeled samples (i.e., the target domain) are unavailable, whereas samples from auxiliary domains might be available. Under such circumstances, an interesting question is whether an active learner can actively label samples from auxiliary domains to benefit the target domain. In this paper, we propose a transfer active learning method, namely Transfer Active SVM (TrAcSVM), which uses a limited number of target instances to iteratively discover and label informative auxiliary instances. TrAcSVM employs an extended sigmoid function as an instance-weight updating mechanism to adjust the model for predicting (newly arrived) target data. Experimental results on real-world data sets demonstrate that TrAcSVM achieves better efficiency and prediction accuracy than competing methods. © 2011 ACM.
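The abstract only outlines the approach, but the general loop it describes can be sketched as follows: train an SVM on a small labeled target set plus weighted auxiliary instances, query the most informative (closest-to-boundary) auxiliary instance each round, and re-weight labeled auxiliary instances through a sigmoid of how well they agree with the current model. This is a minimal, hypothetical illustration under those assumptions; the function names, the query criterion, and the exact form of the sigmoid weighting are not taken from the paper.

```python
# Hypothetical sketch of a transfer active learning loop in the spirit of
# TrAcSVM, as summarized in the abstract. Not the authors' implementation.
import numpy as np
from sklearn.svm import SVC

def sigmoid_weight(signed_margin, scale=1.0):
    # Weight auxiliary instances by a sigmoid of their signed margin:
    # instances consistent with the current model keep a high weight,
    # conflicting ones are down-weighted (an assumed form of the
    # "extended sigmoid" weighting mentioned in the abstract).
    return 1.0 / (1.0 + np.exp(-scale * signed_margin))

def transfer_active_learning(X_tgt, y_tgt, X_aux, oracle, n_queries=20):
    labeled_aux, y_aux = [], {}
    weights_aux = np.ones(len(X_aux))
    clf = None

    for _ in range(n_queries):
        # Training data: target instances (weight 1) plus the auxiliary
        # instances labeled so far, with their current weights.
        X_train = np.vstack([X_tgt] + [X_aux[i:i + 1] for i in labeled_aux])
        y_train = np.concatenate([y_tgt, [y_aux[i] for i in labeled_aux]])
        w_train = np.concatenate([np.ones(len(X_tgt)),
                                  weights_aux[labeled_aux]])

        clf = SVC(kernel="linear").fit(X_train, y_train, sample_weight=w_train)

        # Query the unlabeled auxiliary instance closest to the boundary.
        unlabeled = [i for i in range(len(X_aux)) if i not in y_aux]
        if not unlabeled:
            break
        scores = np.abs(clf.decision_function(X_aux[unlabeled]))
        pick = unlabeled[int(np.argmin(scores))]
        y_aux[pick] = oracle(X_aux[pick])   # ask the oracle for a +1/-1 label
        labeled_aux.append(pick)

        # Re-weight labeled auxiliary instances by agreement with the model.
        margins = clf.decision_function(X_aux[labeled_aux])
        signed = margins * np.array([y_aux[i] for i in labeled_aux])
        weights_aux[labeled_aux] = sigmoid_weight(signed)

    return clf
```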