AB - Domain adaptation arises in supervised learning when the training (source domain) and test (target domain) data have different distributions. Let X and Y denote the features and target, respectively; previous work on domain adaptation mainly considers the covariate shift situation, where the distribution of the features P(X) changes across domains while the conditional distribution P(Y|X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components T(X) that have similar P(T(X)) on different domains by explicitly minimizing a distribution discrepancy measure. However, it is not clear if P(Y|T(X)) in different domains is also similar when P(Y|X) changes. Furthermore, transferable components do not necessarily have to be invariant. If the change in some components is identifiable, we can make use of such components for prediction in the target domain. In this paper, we focus on the case where P(X|Y) and P(Y) both change in a causal system in which Y is the cause for X. Under appropriate assumptions, we aim to extract conditional transferable components whose conditional distribution P(T(X)|Y) is invariant after proper location-scale (LS) transformations, and simultaneously identify how P(Y) changes between domains. We provide theoretical analysis and empirical evaluation on both synthetic and real-world data to show the effectiveness of our method.
AU - Gong, M
AU - Zhang, K
AU - Liu, T
AU - Tao, D
AU - Glymour, C
AU - Schölkopf, B
DA - 2016/01/01
EP - 4165
JO - 33rd International Conference on Machine Learning, ICML 2016
PY - 2016/01/01
SP - 4149
TI - Domain adaptation with conditional transferable components
VL - 6
Y1 - 2016/01/01
Y2 - 2022/09/25
ER -