Two-Stream Multirate Recurrent Neural Network for Video-Based Pedestrian Reidentification

Publication Type:
Journal Article
Citation:
IEEE Transactions on Industrial Informatics, 2018, 14 (7), pp. 3179 - 3186
Issue Date:
2018-07-01
File:
08089379.pdf (Published Version, Adobe PDF, 453.2 kB)
Abstract:
© 2005-2012 IEEE. Video-based pedestrian reidentification is an emerging task in video surveillance and is closely related to several real-world applications. Its goal is to match pedestrians across multiple nonoverlapping network cameras. Despite recent efforts, the performance of pedestrian reidentification still needs improvement. Hence, we propose a novel two-stream multirate recurrent neural network for video-based pedestrian reidentification with two inherent advantages: first, it captures both static spatial and dynamic temporal information; second, it handles motion-speed variance. Given video sequences of pedestrians, we start by extracting spatial and motion features using two different deep neural networks. Then, we explore the feature correlation, which results in a regularized fusion network integrating the two aforementioned networks. Considering that pedestrians, sometimes even the same pedestrian, move at different speeds across different camera views, we extend our approach by feeding the two networks into a multirate recurrent network to exploit the temporal correlations. Extensive experiments have been conducted on two real-world video-based pedestrian reidentification benchmarks: the iLIDS-VID and PRID 2011 datasets. The experimental results confirm the efficacy of the proposed method. Our code will be released upon acceptance.
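The abstract's multirate idea can be illustrated with a minimal sketch: two recurrent streams consume per-frame spatial and motion features, with one stream ticking at a slower rate before the hidden states are fused. This is only an assumption-laden toy in NumPy; the weight names, `slow_rate` parameter, and fusion by concatenation are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

def multirate_two_stream(spatial_feats, motion_feats,
                         hidden_dim=8, slow_rate=2, seed=0):
    """Toy two-stream multirate recurrent fusion (illustrative only).

    spatial_feats, motion_feats: arrays of shape (T, D), standing in for
    per-frame outputs of the two (hypothetical) feature extractors.
    The fast stream updates every frame; the slow stream updates every
    `slow_rate` frames, mimicking a multirate recurrent schedule.
    """
    rng = np.random.default_rng(seed)
    T, D = spatial_feats.shape
    # Randomly initialised recurrent weights; a real model learns these.
    W_fast = rng.standard_normal((hidden_dim, D)) * 0.1
    U_fast = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
    W_slow = rng.standard_normal((hidden_dim, D)) * 0.1
    U_slow = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1

    h_fast = np.zeros(hidden_dim)
    h_slow = np.zeros(hidden_dim)
    for t in range(T):
        # Fast (appearance) stream: one recurrent update per frame.
        h_fast = np.tanh(W_fast @ spatial_feats[t] + U_fast @ h_fast)
        if t % slow_rate == 0:
            # Slow (motion) stream ticks at a lower rate, so it is less
            # sensitive to how quickly the pedestrian moves.
            h_slow = np.tanh(W_slow @ motion_feats[t] + U_slow @ h_slow)
    # Fuse the two streams into one sequence-level descriptor.
    return np.concatenate([h_fast, h_slow])
```

Two pedestrian tracklets could then be compared by a distance between their fused descriptors; in the paper the fusion is a learned, regularized network rather than plain concatenation.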