Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron

Publication Type:
Journal Article
Pattern Recognition Letters, 2012, 33 (7), pp. 882 - 889
Gait has been shown to be an efficient biometric feature for human identification at a distance. However, gait recognition performance degrades under view variation, which makes cross-view gait recognition difficult. A novel method is proposed to address this difficulty using a view transformation model (VTM). The VTM is constructed through regression, adopting a multi-layer perceptron (MLP) as the regression tool: it estimates the gait feature under one view from a well-selected region of interest (ROI) of the gait feature under another view. Trained VTMs can therefore normalize gait features from different views into a common view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition method that estimates the gait feature under one view from selected gait features under several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms baseline methods in the literature for both cross-view and multi-view gait recognition. In particular, average accuracies of 99%, 98% and 93% are achieved for multi-view gait recognition using 5, 4 and 3 cameras, respectively. © 2011 Elsevier B.V. All rights reserved.
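The core idea in the abstract — a regression model that maps a gait feature ROI from one view to the corresponding feature in another view, so that probe and gallery can be matched in a common view — can be illustrated with a minimal sketch. This is not the authors' implementation; the synthetic features, the linear view transform, the ROI choice, and the use of scikit-learn's `MLPRegressor` are all illustrative assumptions standing in for real Gait Energy Image features and the paper's trained VTM.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for gait features: rows are subjects,
# columns are flattened per-view feature dimensions.
n_subjects, n_feat = 200, 64
feat_view_a = rng.random((n_subjects, n_feat))

# Pretend view B is an unknown transform of an ROI of view A
# (a random linear map here, purely for illustration).
roi = slice(0, n_feat // 2)
mix = rng.standard_normal((n_feat // 2, n_feat)) * 0.1
feat_view_b = feat_view_a[:, roi] @ mix

# VTM as regression: an MLP predicts the view-B feature from the
# selected ROI of the view-A feature, trained on paired samples.
vtm = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
vtm.fit(feat_view_a[:150, roi], feat_view_b[:150])

# Normalize unseen view-A probes into view B before similarity
# is measured against the view-B gallery.
predicted_b = vtm.predict(feat_view_a[150:, roi])

def cosine(u, v):
    # Cosine similarity as a simple gait-matching score.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

sims = [cosine(predicted_b[i], feat_view_b[150 + i]) for i in range(50)]
```

The multi-view extension described in the abstract would follow the same pattern, with the regressor's input formed by concatenating selected ROIs from several source views rather than one.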