Improving what cross-modal retrieval models learn through object-oriented inter- and intra-modal attention networks
- Publisher:
- Association for Computing Machinery
- Publication Type:
- Conference Proceeding
- Citation:
- ICMR 2019 - Proceedings of the 2019 ACM International Conference on Multimedia Retrieval, 2019, pp. 244-252
- Issue Date:
- 2019-06-05
Closed Access
Filename | Description | Size
---|---|---
3323873.3325043.pdf | Published version | 6.19 MB
This item is closed access and not available.
Although significant progress has been made on cross-modal retrieval models in recent years, few studies have explored what those models truly learn and what makes one model superior to another. By training two state-of-the-art text-to-image retrieval models with adversarial text inputs, we investigate and quantify the importance of syntactic structure and lexical information in learning the joint visual-semantic embedding space for cross-modal retrieval. The results show that the retrieval power mainly comes from localizing and connecting visual objects with their cross-modal counterparts, the textual phrases. Inspired by this observation, we propose a novel model that employs object-oriented encoders together with inter- and intra-modal attention networks to strengthen inter-modal dependencies for cross-modal retrieval. In addition, we develop a new multimodal structure-preserving objective that additionally emphasizes intra-modal hard negative examples to promote intra-modal discrepancies. Extensive experiments show that the proposed approach outperforms the existing best method by a large margin (relative improvements of 16.4% and 6.7% in Recall@1 for the text-to-image retrieval task on the Flickr30K and MS-COCO datasets, respectively).
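The abstract describes the structure-preserving objective only at a high level. As a rough illustration, the sketch below shows one common way a hinge-based ranking loss with both inter-modal and intra-modal hard negatives could be written in PyTorch. The function name, margin value, and the exact form of the intra-modal terms are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def structure_preserving_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based ranking loss over a joint embedding space (illustrative sketch).

    img_emb, txt_emb: (batch, dim) L2-normalized embeddings of matching
    image-text pairs (row i of each tensor corresponds to the same item).
    Inter-modal terms use the hardest cross-modal negative in the batch;
    intra-modal terms additionally penalize same-modality hard negatives,
    which is the general idea the abstract describes.
    """
    scores = img_emb @ txt_emb.t()                 # cross-modal similarities
    pos = scores.diag()                            # scores of matching pairs

    # Mask out the positives (and self-similarities) before taking hard negatives.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    neg_inf = torch.finfo(scores.dtype).min

    # Inter-modal hard negatives: hardest wrong caption per image, hardest wrong image per caption.
    hard_txt = scores.masked_fill(mask, neg_inf).max(dim=1).values
    hard_img = scores.masked_fill(mask, neg_inf).max(dim=0).values
    inter = F.relu(margin + hard_txt - pos).mean() + F.relu(margin + hard_img - pos).mean()

    # Intra-modal hard negatives (assumed form): the most similar *other* image or
    # *other* caption should still score below the matching cross-modal pair.
    img_sim = (img_emb @ img_emb.t()).masked_fill(mask, neg_inf).max(dim=1).values
    txt_sim = (txt_emb @ txt_emb.t()).masked_fill(mask, neg_inf).max(dim=1).values
    intra = F.relu(margin + img_sim - pos).mean() + F.relu(margin + txt_sim - pos).mean()

    return inter + intra
```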