Robust Gait Recognition under Unconstrained Environments Using Hybrid Descriptions

Publication Type:
Conference Proceeding
Citation:
DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications, December 2017, pp. 1-7
Issue Date:
2017-12-19
Filename: DICTA2017_Lingxiang paper.pdf (Published version, Adobe PDF, 310.99 kB)
© 2017 IEEE. Gait is one of the key biometric features that has been widely applied to human identification. Appearance-based features and motion-based features are the two main representations used in gait recognition. However, appearance-based features are sensitive to body shape changes, and silhouette extraction from real-world images and videos remains a challenge. As for motion features, because the underlying models are difficult to extract from gait sequences, the localization of human joints lacks reliability and robustness. This paper proposes a new approach that uses Two-Point Gait (TPG) as the motion feature to remedy the deficiencies of the appearance feature based on the Gait Energy Image (GEI), in order to increase the robustness of gait recognition in unconstrained environments with view changes and clothing changes. Another contribution of this paper is that, to our knowledge, this is the first time TPG has been applied to view-change and clothing-change problems since it was proposed. Extensive experiments show that the proposed method is more invariant to view and clothing changes and can significantly improve the robustness of gait recognition.
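As a brief illustration of the appearance descriptor mentioned in the abstract, the GEI is conventionally computed as the per-pixel mean of aligned, size-normalized binary silhouettes over one gait cycle. The following is a minimal sketch under that standard definition, not the authors' implementation; silhouette extraction and alignment are assumed to be handled elsewhere, and the loader name in the usage comment is hypothetical.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a Gait Energy Image (GEI) from one gait cycle.

    silhouettes: iterable of 2-D binary arrays of identical shape, each an
    aligned, size-normalized silhouette frame from a single gait cycle.
    Returns a float array in [0, 1]: per-pixel average foreground occupancy.
    """
    frames = np.stack([np.asarray(s, dtype=np.float64) for s in silhouettes])
    return frames.mean(axis=0)

# Hypothetical usage (loader is an assumed helper, not part of the paper):
# cycle = load_aligned_silhouettes("subject01_view090")
# gei = gait_energy_image(cycle)
```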