A deep learning framework for Hybrid Heterogeneous Transfer Learning

Publication Type:
Journal Article
Citation:
Artificial Intelligence, 2019, vol. 275, pp. 310–328
Issue Date:
2019-10-01
Filename: 1-s2.0-S0004370219301493-main.pdf
Description: Published Version
Size: 707.57 kB
Format: Adobe PDF
Abstract:
© 2019 Elsevier B.V. Most previous methods in heterogeneous transfer learning learn a cross-domain feature mapping between different domains based on some cross-domain instance correspondences. These corresponding instances are assumed to be representative of the source domain and the target domain, respectively. However, in many real-world scenarios, this assumption may not hold. As a result, the constructed feature mapping may not be precise, and the source-domain labeled data transformed through it are not useful for building an accurate classifier in the target domain. In this paper, we propose a new heterogeneous transfer learning framework named Hybrid Heterogeneous Transfer Learning (HHTL), which allows the corresponding instances selected across domains to be biased toward the source or the target domain. Our basic idea is that even though the corresponding instances are biased in the original feature space, there may exist other feature spaces in which, after projection, they become unbiased, i.e., representative of the source domain and the target domain, respectively. With such a representation, a more precise feature mapping across heterogeneous feature spaces can be learned for knowledge transfer. We design several deep-learning-based architectures and algorithms that enable learning such aligned representations. Extensive experiments on two multilingual classification datasets verify the effectiveness of our proposed HHTL framework and algorithms compared with several state-of-the-art methods.
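To make the abstract's pipeline concrete, below is a minimal illustrative sketch, not the authors' code: it assumes one marginalized denoising autoencoder (mDA) layer per domain as a stand-in for the paper's deep architectures, a ridge-regression mapping fitted on cross-domain corresponding pairs to transform source-domain features into the target feature space, and a standard logistic-regression classifier. All function names, hyperparameters, and the toy data are hypothetical.

```python
# Hedged sketch of the HHTL idea: learn higher-level features per domain,
# fit a source-to-target feature mapping on corresponding instance pairs,
# then train a target-space classifier on the transformed source labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def mda_layer(X, noise=0.5, reg=1e-5):
    """One marginalized denoising autoencoder layer (closed form).
    X: (d, n) feature matrix; returns a (d, n) hidden representation."""
    Xb = np.vstack([X, np.ones((1, X.shape[1]))])  # append a bias row
    d = Xb.shape[0]
    q = np.full((d, 1), 1.0 - noise)
    q[-1] = 1.0                                    # never corrupt the bias
    S = Xb @ Xb.T
    Q = S * (q @ q.T)
    Q[np.diag_indices(d)] = q.ravel() * np.diag(S)
    P = S[:-1, :] * q.T
    W = P @ np.linalg.inv(Q + reg * np.eye(d))     # (d, d+1) reconstruction weights
    return np.tanh(W @ Xb)

def fit_cross_domain_mapping(H_src_pairs, H_tgt_pairs, reg=1.0):
    """Ridge mapping F (d_tgt x d_src) from the source feature space to the
    target feature space, fitted on corresponding instance pairs."""
    d = H_src_pairs.shape[0]
    A = H_src_pairs @ H_src_pairs.T + reg * np.eye(d)
    B = H_src_pairs @ H_tgt_pairs.T
    return np.linalg.solve(A, B).T

# --- toy usage with random stand-ins for the multilingual data --------------
rng = np.random.default_rng(0)
Xs, ys = rng.standard_normal((50, 200)), rng.integers(0, 2, 200)   # labeled source (50-dim)
Xt = rng.standard_normal((30, 300))                                 # unlabeled target (30-dim)
Xs_pair = rng.standard_normal((50, 80))                             # corresponding pairs
Xt_pair = rng.standard_normal((30, 80))

Hs, Ht = mda_layer(Xs), mda_layer(Xt)               # per-domain higher-level features
F = fit_cross_domain_mapping(mda_layer(Xs_pair), mda_layer(Xt_pair))

clf = LogisticRegression(max_iter=1000).fit((F @ Hs).T, ys)  # train on mapped source data
target_predictions = clf.predict(Ht.T)                        # classify target-domain data
```

The single mDA layer here plays the role of the "other feature spaces" mentioned in the abstract; stacking several such layers and learning one mapping per layer would be the natural extension, but the exact architectures used in the paper are not reproduced here.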