CLIP-Enhanced Unsupervised Domain Adaptation with Consistency Regularization
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2024 International Joint Conference on Neural Networks (IJCNN), 2024, pp. 1-8
- Issue Date:
- 2024-09-09
Closed Access
Filename | Description | Size
---|---|---
CLIP-Enhanced_Unsupervised_Domain_Adaptation_with_Consistency_Regularization.pdf | Published version | 1.06 MB
This item is closed access and not available.
Unsupervised domain adaptation (UDA) employs labeled data from a source domain to train classifiers for an unlabeled target domain. We utilize Contrastive Language-Image Pre-training (CLIP) models to exploit the textual information in labels, enabling simultaneous matching of textual and image features. However, adapting CLIP models for UDA tasks poses a significant challenge and necessitates further investigation. To this end, we introduce CLIP-Enhanced Unsupervised Domain Adaptation with Consistency Regularization, which employs consistency regularization for the concurrent training of CLIP's prompts and image adapters. Our approach, particularly under consistency regularization, incorporates data augmentation to enhance the model's generalization capability. During training, we maintain consistent pseudo-labels for target-domain data regardless of whether weak or strong augmentation techniques are applied. This strategy improves our model's robustness in adapting to various domains. Additionally, the integration of domain-specific prompts and image adapters in our model optimizes the learning of domain-related textual and image features. Experiments on real-world datasets substantiate the effectiveness of our proposed method. The outcomes illustrate its superior performance compared to existing techniques across multiple benchmarks.
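The consistency-regularization idea described above (pseudo-labels from a weakly augmented view supervising the strongly augmented view) can be sketched as follows. This is a minimal, dependency-free illustration of the general weak/strong pseudo-labeling scheme, not the authors' implementation; the function names, the confidence threshold, and the per-sample logit lists are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def consistency_loss(logits_weak, logits_strong, threshold=0.9):
    """Weak/strong consistency loss on unlabeled target-domain data.

    For each sample, a pseudo-label is taken from the weakly augmented
    view; if its confidence exceeds `threshold`, cross-entropy pushes the
    strongly augmented view's prediction toward that pseudo-label.
    (threshold=0.9 is an illustrative choice, not from the paper.)
    """
    total, kept = 0.0, 0
    for lw, ls in zip(logits_weak, logits_strong):
        probs = softmax(lw)
        conf = max(probs)
        pseudo = probs.index(conf)          # hard pseudo-label
        if conf >= threshold:               # keep only confident samples
            total += -math.log(softmax(ls)[pseudo])
            kept += 1
    return total / max(kept, 1)             # average over retained samples
```

In a CLIP-based setup, the logits would come from image-text similarity between the adapted image features and the prompt-derived class embeddings; low-confidence target samples contribute no gradient, which keeps the pseudo-labels consistent across augmentations.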