Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation.

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4), pp. 1793-1804
Issue Date:
2022-01-01
Abstract:
Unsupervised Domain Adaptation (UDA) makes predictions for target-domain data while manual annotations are available only in the source domain. Previous methods minimize the domain discrepancy while neglecting class information, which may lead to misalignment and poor generalization. To tackle this issue, this paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric, Contrastive Domain Discrepancy (CDD), that explicitly models the intra-class and the inter-class domain discrepancy. Optimizing CAN raises two technical issues: 1) target labels are not available, and 2) conventional mini-batch sampling is imbalanced. We therefore design an alternating update strategy that optimizes both the target label estimates and the feature representations, and we develop class-aware sampling to enable more efficient and effective training. The framework applies to both the single-source and the multi-source domain adaptation scenario. In particular, to handle multiple source domains, we propose 1) a multi-source clustering ensemble that exploits the complementary knowledge of distinct source domains to produce more accurate and robust target label estimates, and 2) boundary-sensitive alignment to better fit the decision boundary to the target. Experiments on three real-world benchmarks demonstrate that CAN performs favorably against previous state-of-the-art methods.
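As a rough illustration of the kind of objective the abstract describes, the sketch below computes a class-conditional, MMD-style discrepancy: an intra-class term (same class across source and target) to be minimized, minus an inter-class term (different classes across domains) to be implicitly maximized. The function names, the single RBF kernel, and the biased MMD estimator are assumptions made for this sketch only, not the paper's exact estimator; the target labels here stand in for the pseudo-labels that CAN estimates by clustering.

    # Illustrative sketch of a contrastive-domain-discrepancy-style loss.
    # Not the paper's exact formulation; kernel choice and estimator are simplified.
    import numpy as np

    def rbf_kernel(a, b, sigma=1.0):
        """RBF kernel matrix between rows of a and rows of b."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def mmd2(x, y, sigma=1.0):
        """Squared MMD between two feature sets (biased estimator)."""
        return (rbf_kernel(x, x, sigma).mean()
                + rbf_kernel(y, y, sigma).mean()
                - 2.0 * rbf_kernel(x, y, sigma).mean())

    def contrastive_domain_discrepancy(src_feat, src_lab, tgt_feat, tgt_lab, num_classes):
        """Average intra-class discrepancy (same class across domains)
        minus average inter-class discrepancy (different classes across
        domains). tgt_lab plays the role of clustering-based pseudo-labels."""
        intra, inter, n_intra, n_inter = 0.0, 0.0, 0, 0
        for c in range(num_classes):
            s_c = src_feat[src_lab == c]
            t_c = tgt_feat[tgt_lab == c]
            if len(s_c) and len(t_c):
                intra += mmd2(s_c, t_c)
                n_intra += 1
            for c2 in range(num_classes):
                if c2 == c:
                    continue
                t_c2 = tgt_feat[tgt_lab == c2]
                if len(s_c) and len(t_c2):
                    inter += mmd2(s_c, t_c2)
                    n_inter += 1
        intra /= max(n_intra, 1)
        inter /= max(n_inter, 1)
        # Minimizing this pulls same-class features together across domains
        # and pushes different-class features apart.
        return intra - inter

In the alternating scheme the abstract outlines, the target pseudo-labels feeding such a loss would be refreshed by clustering target features between feature-update steps, and class-aware sampling would ensure each mini-batch contains both domains for the sampled classes.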