Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation
- Publisher:
- Institute of Electrical and Electronics Engineers
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(8), pp. 1-17
- Issue Date:
- 2022-08-07
Closed Access
| Filename | Description | Size |
|---|---|---|
| Where_and_How_to_Transfer_Knowledge_Aggregation-Induced_Transferability_Perception_for_Unsupervised_Domain_Adaptation.pdf | Published version | 11.51 MB |
This item is closed access and not available.
Unsupervised domain adaptation, which avoids the expensive annotation process for target data, has achieved remarkable success in semantic segmentation. However, most existing state-of-the-art methods cannot determine whether semantic representations across domains are transferable, which may result in negative transfer caused by irrelevant knowledge. To tackle this challenge, in this paper we develop a novel Knowledge Aggregation-induced Transferability Perception (KATP) module for unsupervised domain adaptation, a pioneering attempt to distinguish transferable from untransferable knowledge across domains. Specifically, the KATP module quantifies which semantic knowledge across domains is transferable by incorporating transferability information propagated from constructed global category-wise prototypes. Based on KATP, we design a novel KATP Adaptation Network (KATPAN) to determine where and how to transfer. KATPAN contains a transferable appearance translation module TA(·) and a transferable representation augmentation module TR(·), which together form a virtuous circle of performance promotion: TA(·) develops a transferability-aware information bottleneck to highlight where to adapt transferable visual characterizations and modality information, while TR(·) explores how to augment transferable representations while abandoning untransferable information, promoting the translation performance of TA(·) in return. Comprehensive experiments on several representative benchmark datasets and a medical dataset support the state-of-the-art performance of our model.
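The abstract's core idea of global category-wise prototypes and a per-class transferability measure can be illustrated with a minimal sketch. This is not the paper's actual KATP implementation; the function names, the mean-feature prototypes, and the use of cosine similarity as the transferability score are all simplifying assumptions for illustration.

```python
import numpy as np

def category_prototypes(features, labels, num_classes):
    """Mean feature vector per class: a simple stand-in for the
    global category-wise prototypes described in the abstract."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def transferability_scores(src_protos, tgt_protos, eps=1e-8):
    """Cosine similarity between source and target prototypes of the
    same class; a higher score marks that class's knowledge as more
    transferable (hypothetical scoring rule, not the paper's)."""
    num = (src_protos * tgt_protos).sum(axis=1)
    den = (np.linalg.norm(src_protos, axis=1)
           * np.linalg.norm(tgt_protos, axis=1) + eps)
    return num / den
```

Scores near 1 would flag a class as safe to transfer, while low or negative scores would flag knowledge to down-weight, in the spirit of avoiding negative transfer from irrelevant knowledge.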