RGB-IR Cross-modality Person ReID Based on Teacher-student GAN Model

Publication Type:
Journal Article
Pattern Recognition Letters, 2021, 150, pp. 155-161
RGB-Infrared (RGB-IR) person re-identification (ReID) is the task of automatically matching the same person across different parts of a video when visible light is unavailable. The critical challenge of this task is the cross-modality gap between features extracted under the two modalities. To address this challenge, we propose a Teacher-Student GAN model (TS-GAN) that adapts to the different domains and guides the ReID backbone. (1) To obtain corresponding RGB-IR image pairs, an RGB-IR Generative Adversarial Network (GAN) is used to generate IR images. (2) To kick-start identity training, a ReID Teacher module is trained on IR-modality person images and then used to guide its Student counterpart during training. (3) To better align the features of the two domains and enhance ReID performance, three Teacher-Student loss functions are used. Unlike other GAN-based models, the proposed model needs only the backbone module at the test stage, making it more efficient and resource-saving. To showcase our model's capability, we conducted extensive experiments on the SYSU-MM01 and RegDB RGB-IR ReID benchmarks and achieved performance superior to the state of the art, with 47.4% mAP and 69.4% mAP respectively.
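The abstract does not spell out the three Teacher-Student loss functions, but the general pattern — a frozen Teacher guiding a Student by penalizing disagreement in feature space and in output distributions — can be sketched as follows. This is an illustrative sketch only, not the paper's actual formulation; the function names, the choice of MSE feature mimicking and temperature-softened KL distillation, and the toy values are all assumptions.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def feature_mimic_loss(f_student, f_teacher):
    """Mean-squared error pulling student features toward the
    (frozen) teacher's features -- one common teacher-student loss."""
    f_s = np.asarray(f_student, dtype=float)
    f_t = np.asarray(f_teacher, dtype=float)
    return float(np.mean((f_s - f_t) ** 2))

def logit_distill_loss(z_student, z_teacher, temperature=4.0):
    """KL divergence between temperature-softened teacher and student
    class distributions (classic knowledge-distillation loss)."""
    p = softmax(z_teacher, temperature)  # teacher's soft targets
    q = softmax(z_student, temperature)  # student's prediction
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy example: a student close to the teacher incurs small losses.
teacher_feat = np.array([0.2, -0.1, 0.5])
student_feat = np.array([0.25, -0.05, 0.45])
print(feature_mimic_loss(student_feat, teacher_feat))
print(logit_distill_loss([1.0, 2.0, 0.5], [1.1, 1.9, 0.4]))
```

In a training loop, losses of this kind would be summed with the usual identity-classification loss and backpropagated through the Student only, which is consistent with the paper's point that the Teacher and GAN can be discarded at test time, leaving just the backbone.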