Unsupervised Domain Adaptation with Background Shift Mitigating for Person Re-Identification

Publisher:
Springer
Publication Type:
Journal Article
Citation:
International Journal of Computer Vision, 2021, 129, (7), pp. 2244-2263
Issue Date:
2021-07-01
Abstract:
Unsupervised domain adaptation has been a popular approach to cross-domain person re-identification (re-ID). Two solutions follow this approach. The first builds a model to transform data across the two domains, so that source-domain data can be transferred into the target domain, where a re-ID model can then be trained on the rich transferred data. The second uses target-domain data, together with corresponding virtual labels, to train a re-ID model. The constraints of both solutions are clear. The first heavily relies on the quality of the data-transformation model; moreover, the resulting re-ID model is trained on source-domain data and therefore lacks knowledge of the target domain. The second in effect mixes target-domain data carrying virtual labels with source-domain data carrying true annotations, but such a simple mixture does not account for the raw information gap between the two domains, a gap largely attributable to background differences. In this paper, a Suppression of Background Shift Generative Adversarial Network (SBSGAN) is proposed to mitigate the data gap between the two domains. To tackle the constraints of the first solution, the paper further proposes a Densely Associated 2-Stream (DA-2S) network with an update strategy that learns discriminative ID features from the generated data, considering both human-body information and useful ID-related cues in the environment. The resulting re-ID model is then further updated using target-domain data with corresponding virtual labels. Extensive evaluations on three large benchmark datasets show the effectiveness of the proposed method.
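The second solution described in the abstract, training on unlabeled target-domain data with "virtual" (pseudo) labels, can be illustrated with a minimal self-training sketch. This is a simplified toy, not the paper's actual method: the feature vectors, the nearest-centroid labeling rule, and the centroid update are placeholder assumptions standing in for a learned re-ID feature extractor and its fine-tuning step.

```python
# Toy sketch of pseudo-label self-training (an assumption-laden simplification,
# not the SBSGAN/DA-2S pipeline): unlabeled target features are assigned the ID
# of their nearest class centroid, then the centroids are refined from the
# newly pseudo-labeled data, and the two steps alternate.

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pseudo_label(features, centroids):
    """Assign each target-domain feature the ID of its nearest centroid."""
    return [min(centroids, key=lambda c: euclidean(f, centroids[c]))
            for f in features]

def update_centroids(features, labels):
    """Recompute each ID's centroid from the pseudo-labeled features."""
    sums, counts = {}, {}
    for f, lab in zip(features, labels):
        counts[lab] = counts.get(lab, 0) + 1
        sums[lab] = [s + x for s, x in zip(sums.get(lab, [0.0] * len(f)), f)]
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

# Toy data: two source-domain IDs and four unlabeled target-domain samples.
centroids = {"id_a": [0.0, 0.0], "id_b": [5.0, 5.0]}
target = [[0.2, 0.1], [4.8, 5.1], [0.3, -0.2], [5.2, 4.9]]

for _ in range(2):  # a couple of self-training rounds
    labels = pseudo_label(target, centroids)
    centroids = update_centroids(target, labels)

# labels now holds the virtual labels used to update the model:
# ["id_a", "id_b", "id_a", "id_b"]
```

In practice, re-ID self-training replaces the nearest-centroid rule with clustering over deep features and replaces the centroid update with gradient-based fine-tuning of the network; the alternation between labeling and updating is the part this sketch preserves.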