Parameter-Efficient Person Re-Identification in the 3D Space.
- Publisher:
- Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Neural Networks and Learning Systems, 2022, vol. PP, no. 99, pp. 1-14
- Issue Date:
- 2022-10-31
Closed Access
| Filename | Description | Size |
|---|---|---|
| Parameter-Efficient_Person_Re-Identification_in_the_3D_Space.pdf | Published version | 4.15 MB |
This item is closed access and not available.
People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient omni-scale graph network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variations, such as scale and viewpoint. To our knowledge, this work is among the first attempts to conduct person re-id in the 3D space. We demonstrate through extensive experiments that the proposed method: 1) eases the matching difficulty in the traditional 2D space; 2) exploits the complementary information of 2D appearance and 3D structure; 3) achieves competitive results with limited parameters on four large-scale person re-id datasets; and 4) has good scalability to unseen datasets. Our code, models, and generated 3D human data are publicly available at https://github.com/layumi/person-reid-3d.
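To make the core idea more concrete, the sketch below shows one way a graph network could combine 3D structure (point coordinates) with 2D appearance (per-point color) at several neighborhood scales, in the spirit of the abstract's description. It is a minimal, hypothetical PyTorch example, not the released OG-Net implementation: the helper `knn_indices`, the module `MultiScaleGraphBlock`, the scale set `(5, 10, 20)`, and the feature sizes are all illustrative assumptions.

```python
# Hypothetical sketch: multi-scale k-NN graph aggregation over a colored 3D
# point cloud to produce a global pedestrian descriptor. All names and scale
# values here are illustrative assumptions, not the authors' OG-Net code.
import torch
import torch.nn as nn


def knn_indices(xyz: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors of each point; xyz is (B, N, 3)."""
    dist = torch.cdist(xyz, xyz)                          # (B, N, N) pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices    # (B, N, k)


class MultiScaleGraphBlock(nn.Module):
    """EdgeConv-style message passing over several neighborhood sizes."""

    def __init__(self, in_dim: int, out_dim: int, scales=(5, 10, 20)):
        super().__init__()
        self.scales = scales
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU()) for _ in scales
        )
        self.fuse = nn.Linear(out_dim * len(scales), out_dim)

    def forward(self, xyz: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) geometry used to build the graph; feat: (B, N, C) appearance
        B, N, C = feat.shape
        batch = torch.arange(B, device=feat.device).view(B, 1, 1)
        outs = []
        for k, mlp in zip(self.scales, self.mlps):
            idx = knn_indices(xyz, k)                      # (B, N, k) neighbor indices
            neighbors = feat[batch, idx]                   # (B, N, k, C)
            center = feat.unsqueeze(2).expand(-1, -1, k, -1)
            edge = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 2C)
            outs.append(mlp(edge).max(dim=2).values)       # max-pool over neighbors
        return self.fuse(torch.cat(outs, dim=-1))          # (B, N, out_dim)


if __name__ == "__main__":
    B, N = 2, 1024
    xyz = torch.rand(B, N, 3)    # body-surface points lifted from the 2D image
    rgb = torch.rand(B, N, 3)    # per-point color sampled from the image
    block = MultiScaleGraphBlock(in_dim=3, out_dim=64)
    point_feat = block(xyz, rgb)                 # (B, N, 64) per-point features
    descriptor = point_feat.max(dim=1).values    # (B, 64) global re-id descriptor
    print(descriptor.shape)
```

Building the neighborhood graph from 3D coordinates while passing appearance features through it is one simple way to keep structure and appearance in a coherent representation; the actual architecture, layer sizes, and pooling choices in the paper may differ.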