Joint Learning of Body and Part Representation for Person Re-Identification

Publication Type:
Journal Article
Citation:
IEEE Access, vol. 6, 2018, pp. 44199–44210
Issue Date:
2018-08-10
Abstract:
© 2018 IEEE. Person re-identification (ReID), which aims to identify people across multiple camera views, has attracted increasing attention due to its potential applications in surveillance and security. Large variations in subjects' postures, viewing angles, and illumination conditions, as well as non-ideal human detection, significantly increase the difficulty of person ReID. Learning a robust metric for measuring the similarity between different person images is another under-addressed problem. In this paper, following the recent success of part-based models, we first propose to learn global and weighted local body-part features from pedestrian images in order to generate a discriminative and robust feature representation. Then, in the training phase, an angular loss and a part-level classification loss are employed jointly as the similarity measure to train the network, which significantly improves the robustness of the resulting network against feature variance. Experimental results on several benchmark data sets demonstrate that our method outperforms the state-of-the-art methods.
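The abstract describes a joint objective combining a metric term on global features with weighted part-level classification terms. The following is a minimal illustrative sketch of such a combined loss, not the paper's actual formulation: the angular loss is stood in for by a simplified cosine-margin triplet term, and the function names (`joint_loss`, `softmax_ce`), the part weights, and the margin value are all hypothetical.

```python
import math

def softmax_ce(logits, label):
    """Cross-entropy of one logit vector against an integer class label."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return -math.log(exps[label] / sum(exps))

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def joint_loss(anchor, positive, negative, part_logits, label,
               margin=0.5, part_weights=None):
    """Toy joint objective (hypothetical): a cosine-margin triplet term
    on global features, as a simplified stand-in for an angular loss,
    plus weighted part-level classification terms."""
    # Metric term: penalize when the anchor-negative similarity is not
    # at least `margin` below the anchor-positive similarity.
    metric = max(0.0, cosine(anchor, negative)
                 - cosine(anchor, positive) + margin)
    # Part-level term: weighted sum of per-part classification losses.
    if part_weights is None:
        part_weights = [1.0 / len(part_logits)] * len(part_logits)
    cls = sum(w * softmax_ce(lg, label)
              for w, lg in zip(part_weights, part_logits))
    return metric + cls
```

For example, with an easy triplet (`anchor == positive`, orthogonal `negative`) and confident part logits, both terms are near zero, so the total loss is small; a hard triplet with uninformative logits yields the full margin penalty plus log-loss.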