Unsupervised Domain Adaptation with Sphere Retracting Transformation

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
Proceedings of the International Joint Conference on Neural Networks, 2019, 2019-July, pp. 1-8
Issue Date:
2019-07-01
© 2019 IEEE. Unsupervised domain adaptation aims to leverage the knowledge in labeled training data (the source domain) to improve task performance on unlabeled data (the target domain) by mitigating the effect of the distribution discrepancy between the two. Existing approaches resolve this problem mainly by 1) mapping data into a latent space where the distribution discrepancy between the two domains is reduced; or 2) reducing the domain shift by weighting the source domain. However, most of these approaches share a common issue: they neglect inter-class margins while matching distributions, which has a significant impact on classification performance. In this paper, we analyze this issue from a theoretical perspective and propose a novel unsupervised domain adaptation approach, Sphere Retracting Transformation (SRT), which reduces the distribution discrepancy and increases inter-class margins. Following our theoretical analysis, we implement SRT by (1) assigning class-specific weights to data in the source domain, and (2) minimizing intra-class variations. Experiments confirm that SRT outperforms several competitive approaches on standard domain adaptation benchmarks.
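The paper's full formulation is not included in this record, but the two implementation steps named in the abstract can be illustrated with a minimal sketch: a class-weighted intra-class variation penalty, where each source sample is pulled toward its class center and the pull is scaled by a class-specific weight. The function names, the NumPy formulation, and the use of squared Euclidean distance are all illustrative assumptions, not the authors' actual objective.

```python
import numpy as np

def class_centers(features, labels, n_classes):
    # Mean feature vector per class; these act as the targets
    # that each class is retracted toward.
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def weighted_intra_class_loss(features, labels, centers, class_weights):
    # Class-weighted mean squared distance to the class center.
    # Minimizing this shrinks intra-class variation, which (per the
    # abstract's argument) leaves larger inter-class margins.
    diffs = features - centers[labels]           # per-sample residual (n, d)
    sq_dist = np.sum(diffs ** 2, axis=1)         # squared distance to center
    return float(np.mean(class_weights[labels] * sq_dist))
```

In a full method the class weights would be chosen to re-balance the source domain against the target distribution; here they are simply free parameters of the sketch.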