Unsupervised Person Re-identification via Cross-camera Similarity Exploration.

Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
IEEE Transactions on Image Processing, 2020, 29, pp. 5481-5490
Most person re-identification (re-ID) approaches are based on supervised learning, which requires manually annotated data. However, acquiring identity annotations is not only resource-intensive but also impractical for large-scale data. To alleviate this problem, we propose a cross-camera unsupervised approach that uses unsupervised style-transferred images to jointly optimize a convolutional neural network (CNN) and the relationships among individual samples for person re-ID. Our algorithm builds on two fundamental facts of the re-ID task, i.e., variance across diverse cameras and similarity within the same identity. In this paper, we propose an iterative framework that overcomes camera variance and achieves cross-camera similarity exploration. Specifically, we apply an unsupervised style-transfer model to generate training images in different camera styles. We then iteratively exploit the similarity within the same identity from both the original and the style-transferred data. We start by treating each training image as a distinct class to initialize the CNN model; we then measure similarity and gradually group similar samples into one class, which increases the similarity within each identity. We also introduce a diversity regularization term into the clustering to balance the cluster distribution. The experimental results demonstrate that our algorithm is not only superior to state-of-the-art unsupervised re-ID approaches, but also performs favorably against competing unsupervised domain adaptation (UDA) and semi-supervised learning methods.
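The abstract's core iteration, starting from one cluster per sample and gradually merging similar samples while a diversity term discourages oversized clusters, can be illustrated with a minimal bottom-up sketch. This is an assumption-laden toy, not the paper's implementation: the function name `diversity_regularized_merge`, the cosine-similarity measure, the min-linkage merge criterion, and the size-based penalty weighted by `lam` are all hypothetical stand-ins for the paper's actual similarity and regularization terms.

```python
import numpy as np

def diversity_regularized_merge(features, num_merges, lam=0.5):
    """Toy sketch: each sample starts as its own cluster; at every step,
    merge the cluster pair with the highest similarity score minus a
    diversity penalty that grows with the merged cluster's size."""
    n = len(features)
    clusters = [[i] for i in range(n)]
    # Cosine similarity between L2-normalized feature vectors.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    for _ in range(num_merges):
        best, best_score = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Min-linkage: weakest pairwise similarity across clusters.
                s = min(sim[i, j] for i in clusters[a] for j in clusters[b])
                # Hypothetical diversity term: penalize large merged clusters
                # so that no single cluster absorbs too many samples.
                score = s - lam * (len(clusters[a]) + len(clusters[b])) / n
                if score > best_score:
                    best, best_score = (a, b), score
        a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

On four 2-D features forming two tight pairs, two merge steps recover the two groups; in the paper's setting the features would instead come from the CNN trained on both original and style-transferred images, and the loop would alternate with CNN re-training.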