Improving Consistency of Proxy-Level Contrastive Learning for Unsupervised Person Re-Identification

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Information Forensics and Security, 2024, vol. 19, pp. 6910-6922
Issue Date:
2024-01-01
Filename:
1737720.pdf (Published version, Adobe PDF, 3.11 MB)
ISSN:
1556-6021
Abstract:
Recently, contrastive learning-based unsupervised person re-identification (Re-ID) methods have attracted significant attention for their effectiveness. These methods rely on predicted pseudo-labels to construct contrastive pairs and progressively optimize the network. Some methods also exploit camera labels to model intra-camera and inter-camera contrastive relations, achieving state-of-the-art results. However, these methods fail to address the inconsistency of proxy-level contrastive learning, which arises from variations in the distribution of instances belonging to the same proxy. Specifically, they are sensitive to the distribution of the instances in a mini-batch used to construct contrastive pairs, and uncertainty or noise in that distribution can cause fluctuations in the contrastive loss, degrading the effectiveness of contrastive learning. In this work, we first propose a dual-branch contrastive learning (DBCL) framework comprising an identity discrimination branch and a camera view awareness branch. The two branches are trained mutually to produce a jointly optimized model with both high person identification accuracy and cross-camera robustness. Moreover, to mitigate the proxy-level contrastive inconsistency in the camera view awareness branch, we design intra-camera and inter-camera consistent contrastive losses. DBCL has been extensively evaluated on several person Re-ID datasets and outperforms state-of-the-art methods. Notably, on the challenging MSMT17 dataset with complex scenes, our method achieves an mAP of 45.3% and a Rank-1 accuracy of 75.3%.
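The paper's specific consistent losses are defined in the full text; the sketch below only illustrates the generic proxy-level contrastive setup the abstract builds on, under assumed shapes and hyperparameters (feature dimension, temperature, momentum coefficient are all illustrative, not the authors' values). The momentum update makes the sensitivity concrete: each proxy is refreshed from whichever of its instances happen to appear in the mini-batch, so a skewed or noisy batch perturbs the proxy and hence the loss.

```python
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, pseudo_labels, proxies, temperature=0.05):
    """Generic proxy-level contrastive (InfoNCE-over-proxies) loss.

    features:      (B, D) L2-normalized instance embeddings from a mini-batch.
    pseudo_labels: (B,)   cluster ids assigned by an offline clusterer (e.g. DBSCAN).
    proxies:       (C, D) L2-normalized cluster centroids, one proxy per cluster.
    """
    logits = features @ proxies.t() / temperature   # (B, C) similarity to every proxy
    return F.cross_entropy(logits, pseudo_labels)   # pull to own proxy, push from the rest

@torch.no_grad()
def momentum_update(proxies, features, pseudo_labels, m=0.9):
    """Refresh each proxy from its cluster's instances present in the batch.

    The update depends on the batch composition, which is the source of the
    proxy-level inconsistency the abstract describes.
    """
    for y in pseudo_labels.unique():
        batch_mean = features[pseudo_labels == y].mean(dim=0)
        proxies[y] = F.normalize(m * proxies[y] + (1 - m) * batch_mean, dim=0)

# Toy usage with hypothetical sizes: batch of 8, 128-d features, 4 clusters.
B, D, C = 8, 128, 4
feats = F.normalize(torch.randn(B, D), dim=1)
labels = torch.randint(0, C, (B,))
proxies = F.normalize(torch.randn(C, D), dim=1)
loss = proxy_contrastive_loss(feats, labels, proxies)
momentum_update(proxies, feats, labels)
```

Camera-aware variants of this idea typically keep one proxy per (cluster, camera) pair so that separate intra-camera and inter-camera contrastive terms can be formed; the consistent losses proposed in this paper are designed to stabilize such terms against batch-level distribution noise.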