Gait Recognition via Effective Global-Local Feature Representation and Local Temporal Aggregation
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2022, pp. 14628-14636
- Issue Date:
- 2022-02-28
Recently Added
Filename | Description | Size
---|---|---
Binder6.pdf | Accepted version | 544.63 kB
Copyright Clearance Process
This item is new to OPUS and is not currently available.
Gait recognition is one of the most important biometric technologies and has been applied in many fields. Recent gait recognition frameworks represent each gait frame with descriptors extracted from either the global appearance or local regions of the human body. However, representations based on global information often neglect the details of the gait frame, while local-region-based descriptors cannot capture the relations among neighboring regions, which reduces their discriminativeness. In this paper, we propose a novel feature extraction and fusion framework that achieves discriminative feature representations for gait recognition. Towards this goal, we take advantage of both global visual information and local region details and develop a Global and Local Feature Extractor (GLFE). Specifically, the GLFE module is composed of our newly designed global and local convolutional layers (GLConv), which combine global and local features in a principled manner. Furthermore, we present a novel operation, Local Temporal Aggregation (LTA), which reduces temporal resolution in exchange for higher spatial resolution, thereby preserving more spatial information. Together, GLFE and LTA significantly improve the discriminativeness of the learned visual features and, in turn, the gait recognition performance. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art gait recognition methods on two popular datasets.
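The core idea behind Local Temporal Aggregation can be illustrated with a minimal sketch: collapse each short run of consecutive frames into a single aggregated frame, so the sequence becomes shorter in time while each frame keeps its full spatial map. This is an illustrative assumption only — the paper implements LTA with learned temporal convolutions inside the network, whereas here a plain non-overlapping temporal max-pool over a NumPy silhouette sequence stands in for it, and the function name and `window` parameter are hypothetical.

```python
import numpy as np

def local_temporal_aggregation(frames: np.ndarray, window: int = 3) -> np.ndarray:
    """Illustrative stand-in for LTA: merge every `window` consecutive
    frames into one, reducing temporal resolution (T -> T // window)
    while the per-frame spatial maps stay at full resolution.

    frames: array of shape (T, H, W), e.g. a gait silhouette sequence.
    Returns an array of shape (T // window, H, W).
    """
    t, h, w = frames.shape
    t_out = t // window
    # Drop trailing frames that do not fill a complete window, then
    # max-pool over each non-overlapping temporal window.
    clipped = frames[: t_out * window].reshape(t_out, window, h, w)
    return clipped.max(axis=1)

# Example: 8 frames of 4x4 maps collapse to 2 aggregated frames.
seq = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
agg = local_temporal_aggregation(seq, window=4)
print(agg.shape)  # (2, 4, 4)
```

The trade-off this sketch demonstrates is the one the abstract describes: a downstream network that processes the aggregated sequence spends its capacity on fewer time steps, leaving more room to keep spatial resolution high.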