Long-term person re-identification using true motion from videos

Publication Type:
Conference Proceeding
Citation:
Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV 2018), pp. 494-502, 2018
Issue Date:
2018-05-03
Files in This Item:
Zhang 2018 Long term person.pdf (Accepted Manuscript version, 1.19 MB, Adobe PDF)
Abstract:
© 2018 IEEE. Most person re-identification approaches and benchmarks assume that pedestrians traverse the surveillance network within a brief period and without significant appearance changes, which explicitly restricts person re-identification to a short-term event and reduces inter-sample similarity measurement to appearance matching. However, in many real-world scenarios, pedestrians are likely to reappear in the surveillance network after a long interval (long-term) and to change their clothing. These scenarios inevitably make the appearances of different subjects more ambiguous and harder to distinguish. In this paper, we consider these scenarios and propose a unified feature representation based on true motion cues from videos, named FIne moTion encoDing (FITD). Our hypothesis is that people maintain consistent motion patterns under non-distracted walking conditions; therefore, motion characteristics are more reliable than static appearance features for describing a walking person. In particular, we extract motion patterns hierarchically by encoding trajectory-aligned descriptors with Fisher vectors in a spatially aligned pyramid. To verify the benefits of the proposed FITD, we collect a new dataset specifically for long-term situations. Extensive experiments demonstrate the merits of FITD, especially in long-term scenarios.
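
As an illustration of the encoding step the abstract describes, the sketch below shows how trajectory-aligned descriptors could be turned into a Fisher-vector representation over spatial cells. The descriptor dimensionality, GMM size, pyramid layout, and the fisher_vector helper are assumptions chosen for illustration; they are a minimal sketch of the general technique, not the authors' exact FITD configuration.

# Illustrative sketch: Fisher-vector encoding of local motion descriptors
# over spatial pyramid cells. All sizes here are hypothetical examples.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode local descriptors (N x D) into a Fisher vector using
    gradients of a diagonal-covariance GMM w.r.t. means and variances."""
    X = np.atleast_2d(descriptors)
    N, _ = X.shape
    gamma = gmm.predict_proba(X)                              # posteriors, N x K
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_   # K, K x D, K x D
    # Deviation of each descriptor from each component mean, whitened.
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]  # N x K x D
    # Gradient statistics, averaged over descriptors.
    g_mu = (gamma[:, :, None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (gamma[:, :, None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    # Standard power and L2 normalization.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Usage: fit a small GMM on pooled descriptors, then encode each
# spatial cell separately and concatenate across the pyramid.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(2000, 32))   # stand-in for trajectory-aligned features
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(train_desc)
cells = [rng.normal(size=(200, 32)) for _ in range(4)]  # e.g., one 2x2 pyramid level
encoding = np.hstack([fisher_vector(c, gmm) for c in cells])
print(encoding.shape)   # (4 cells * 2 * 8 components * 32 dims,) = (2048,)

Concatenating per-cell Fisher vectors is one common way to realize a spatially aligned pyramid: each cell's encoding preserves where on the body the motion statistics were observed, which matters when motion, rather than clothing appearance, is the discriminative cue.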