Learning Part-based Convolutional Features for Person Re-identification

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43, (3), pp. 902-917
Issue Date:
2021
Filename:
Learning_Part-based_Convolutional_Features_for_Person_Re-Identification.pdf
Description:
Published version
Size:
3.62 MB
Format:
Adobe PDF
Abstract:
Part-level features offer fine granularity for pedestrian image description. In this article, we aim to learn discriminative part-informed features for person re-identification. Our contribution is two-fold. First, we introduce a general part-level feature learning method named Part-based Convolutional Baseline (PCB). Given an input image, PCB outputs a convolutional descriptor consisting of several part-level features. PCB is general in that it accommodates several part partitioning strategies, including pose estimation, human parsing, and uniform partitioning. In experiments, we show that the learned descriptor has significantly higher discriminative ability than the global descriptor. Second, based on PCB, we propose refined part pooling (RPP), which locates the parts more precisely. Our idea is that pixels within a well-located part should be similar to each other while being dissimilar from pixels in other parts; we call this within-part consistency. When a pixel-wise feature vector in a part is more similar to some other part, it is an outlier, indicating inappropriate partitioning. RPP re-assigns these outliers to the parts they are closest to, yielding refined parts with enhanced within-part consistency. RPP requires no part labels and is trained in a weakly supervised manner. Experiments confirm that RPP gives PCB a further performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2) percent mAP and (92.3+1.5) percent rank-1 accuracy, competitive with the state of the art.
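
As a rough illustration of the PCB design described in the abstract, below is a minimal PyTorch sketch assuming a ResNet-50 backbone, p = 6 uniform horizontal parts, and a 256-dimensional per-part reduction; these layer choices and hyper-parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal PCB sketch: uniform part partitioning over a conv feature map.
# Assumptions: ResNet-50 backbone, p=6 stripes, 256-dim part features.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PCB(nn.Module):
    def __init__(self, num_classes, num_parts=6, feat_dim=256):
        super().__init__()
        backbone = resnet50(weights=None)
        # keep the convolutional stages, drop global pooling and the fc head
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.num_parts = num_parts
        # average-pool the feature map into p horizontal part columns
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        # one 1x1-conv dimension reducer and one classifier per part
        self.reducers = nn.ModuleList(
            [nn.Conv2d(2048, feat_dim, kernel_size=1) for _ in range(num_parts)])
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_parts)])

    def forward(self, x):
        fmap = self.backbone(x)                  # (B, 2048, H, W)
        cols = self.part_pool(fmap)              # (B, 2048, p, 1)
        logits, feats = [], []
        for i in range(self.num_parts):
            g = cols[:, :, i:i + 1, :]           # i-th part column
            h = self.reducers[i](g).flatten(1)   # (B, feat_dim)
            feats.append(h)
            logits.append(self.classifiers[i](h))
        # training: sum per-part cross-entropy losses over `logits`
        # testing: the concatenated `feats` is the part-based descriptor
        return logits, torch.cat(feats, dim=1)
```

During training, each part feature is supervised by its own identity-classification loss; at test time the p part features are concatenated into the final convolutional descriptor mentioned above.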
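
The refined part pooling (RPP) idea can be sketched in the same vein: a linear part classifier, realized here as a 1x1 convolution (an assumption for concreteness), scores every spatial feature vector against the p parts, and each part feature becomes the assignment-weighted average of all pixels, so outlier pixels migrate toward the part they are most similar to.

```python
# Minimal RPP sketch: soft-assign each pixel-wise feature vector to the
# part it is most similar to, then pool per part. Shapes follow the PCB
# sketch above and are illustrative assumptions.
import torch
import torch.nn as nn

class RefinedPartPooling(nn.Module):
    def __init__(self, in_dim=2048, num_parts=6):
        super().__init__()
        # a 1x1 conv acts as a linear part classifier on each pixel
        self.part_classifier = nn.Conv2d(in_dim, num_parts, kernel_size=1)

    def forward(self, fmap):                                # (B, C, H, W)
        b, c, h, w = fmap.shape
        # probability of each pixel belonging to each part
        probs = self.part_classifier(fmap).softmax(dim=1)   # (B, p, H, W)
        flat_f = fmap.reshape(b, c, h * w)                  # (B, C, HW)
        flat_p = probs.reshape(b, -1, h * w)                # (B, p, HW)
        # assignment-weighted average of pixel features per part
        parts = torch.bmm(flat_p, flat_f.transpose(1, 2))   # (B, p, C)
        parts = parts / flat_p.sum(dim=2, keepdim=True).clamp_min(1e-6)
        return parts  # one C-dim feature vector per refined part
```

This module would replace PCB's uniform pooling; since the pixel-to-part assignments need no part labels, training remains weakly supervised, as the abstract states.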