Source-Free Multi-Domain Adaptation with Generally Auxiliary Model Training

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2022 International Joint Conference on Neural Networks (IJCNN), July 2022, pp. 1-8
Issue Date:
2022-09-30
Abstract:
Unsupervised domain adaptation transfers knowledge gained from labeled source domain(s) to a similar unlabeled target domain by eliminating the domain shift. Most existing domain adaptation methods require access to the source data in order to match the source and target distributions. However, data privacy concerns often make it difficult or impossible to share source data, causing such methods to fail. A few previous studies do address domain adaptation without source data, but they rarely consider source-free adaptation with multiple source domains, which contain richer knowledge. In this paper, we propose a new multi-source, source-free domain adaptation method, generally auxiliary model training (GAM), which fits the source models to the target domain under the supervision of pseudo target labels rather than by matching data distributions. To collect high-quality initial pseudo target labels, our approach learns both specific and general source models, improving the generality of the source models through auxiliary learning. Going further, we introduce a class-balanced coefficient for each category, based on its number of samples, to reduce the misclassification often caused by data imbalance. Experiments on real-world classification datasets show that the proposed generally auxiliary training outperforms the baselines.
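The abstract does not give the exact formula for the class-balanced coefficient; a common choice for such a per-category weight is inverse pseudo-label frequency, sketched below as an illustration (the function name and normalization are assumptions, not the paper's definition):

```python
import numpy as np

def class_balanced_weights(pseudo_labels, num_classes):
    """Per-class coefficients inversely proportional to pseudo-label counts.

    Hypothetical sketch: rare classes receive larger coefficients, which
    can offset misclassification driven by data imbalance when the
    coefficients weight a per-sample loss (e.g. cross-entropy).
    """
    counts = np.bincount(pseudo_labels, minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)  # guard against empty classes
    # Normalized so a perfectly balanced label set yields all-ones weights.
    return counts.sum() / (num_classes * counts)

# Example: an imbalanced pseudo-labeled target set (4 / 2 / 1 samples)
labels = np.array([0, 0, 0, 0, 1, 1, 2])
w = class_balanced_weights(labels, num_classes=3)
```

Here the rarest category (class 2) gets the largest coefficient, so its pseudo-labeled samples contribute more to the training loss.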