Tasks Integrated Networks: Joint Detection and Retrieval for Image Search
- Publisher:
- IEEE COMPUTER SOC
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, PP(1), pp. 456-473
- Issue Date:
- 2022-01-01
Closed Access
Filename | Description | Size
---|---|---
Tasks_Integrated_Networks_Joint_Detection_and_Retrieval_for_Image_Search.pdf | Published version | 2.48 MB
This item is closed access and is not available for download.
The traditional object (person) retrieval (re-identification) task aims to learn a discriminative feature representation or metric on cropped objects. In many real-world scenarios, however, objects are seldom accurately annotated. Object-level retrieval therefore becomes intractable without annotation, which leads to a new but challenging topic, i.e., image search with joint detection and retrieval. To address this image search problem, we introduce an end-to-end Integrated Net with four merits: 1) a Siamese architecture and an on-line pairing strategy for similar and dissimilar objects in the given images; 2) a novel on-line pairing (OLP) loss with a dynamic feature dictionary, which alleviates the multi-task training stagnation problem by automatically generating a number of negative pairs to restrict the positives; 3) two modules tailored to handle the detection and retrieval tasks separately within the integrated framework, so that task specification is guaranteed; and 4) a class-center guided HEP loss (C2HEP) that exploits the stored class centers, so that intra-class similarity and inter-class dissimilarity can be captured. Extensive experiments on the CUHK-SYSU and PRW datasets for person search, and on the large-scale WebTattoo dataset for tattoo search, demonstrate that the proposed model outperforms state-of-the-art image search models.
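The on-line pairing (OLP) loss described in point 2 can be illustrated with a minimal sketch: a query feature is scored against a dynamic dictionary of stored features, where entries with the same identity act as positives and all remaining entries serve as automatically generated negatives. The function below is a hypothetical NumPy simplification, not the paper's exact formulation; the temperature `tau`, the cosine-similarity scoring, and the dictionary layout are assumptions made for illustration.

```python
import numpy as np

def olp_loss(query, feat_dict, labels, target, tau=0.1):
    """Hypothetical sketch of an OLP-style loss over a dynamic feature
    dictionary: maximize the softmax probability mass assigned to
    dictionary entries sharing the query's identity (`target`), with
    every other entry acting as an automatically generated negative."""
    # L2-normalize so the dot products are cosine similarities
    q = query / np.linalg.norm(query)
    d = feat_dict / np.linalg.norm(feat_dict, axis=1, keepdims=True)
    sims = d @ q / tau  # temperature-scaled similarities
    # numerically stable softmax over all dictionary entries
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    # negative log-likelihood of the positive (same-identity) entries
    pos = probs[labels == target].sum()
    return -np.log(pos + 1e-12)
```

As a sanity check, a query aligned with its own class's dictionary entry should incur a much smaller loss than the same query paired against a mismatched identity, which is the pressure that restricts the positives against the generated negatives.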