Annotation efficient cross-modal retrieval with adversarial attentive alignment
- Publisher: Association for Computing Machinery
- Publication Type: Conference Proceeding
- Citation: MM '19: Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 1758-1767
- Issue Date: 2019-10-15
Closed Access
Filename | Description | Size
---|---|---
3343031.3350894.pdf | Published version | 14.24 MB
This item is closed access and not available.
Visual-semantic embeddings are central to many multimedia applications such as cross-modal retrieval between visual data and natural language descriptions. Conventionally, learning a joint embedding space relies on large parallel multimodal corpora. Since massive human annotation is expensive to obtain, there is strong motivation to develop versatile algorithms that learn from large corpora with fewer annotations. In this paper, we propose a novel framework that leverages automatically extracted regional semantics from un-annotated images as additional weak supervision to learn visual-semantic embeddings. The proposed model employs adversarial attentive alignments to close the inherent heterogeneous gaps between the annotated and un-annotated portions of the visual and textual domains. To demonstrate its superiority, we conduct extensive experiments on sparsely annotated multimodal corpora. The experimental results show that the proposed model outperforms state-of-the-art visual-semantic embedding models by a significant margin on cross-modal retrieval tasks over the sparsely annotated Flickr30k and MS-COCO datasets. It is also worth noting that, despite using only 20% of the annotations, the proposed model achieves competitive performance (Recall at 10 > 80.0% for 1K and > 70.0% for 5K text-to-image retrieval) compared to benchmarks trained with the complete annotations.
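To make the high-level idea in the abstract concrete, the sketch below illustrates one possible shape of such a model: a joint visual-semantic embedding trained with a bidirectional triplet ranking loss, plus a small discriminator trained adversarially (via gradient reversal) to make embeddings of annotated and un-annotated images indistinguishable. This is a minimal illustration only, not the authors' implementation: all module names, dimensions, and the gradient-reversal formulation are assumptions, and the paper's attentive alignment over regional semantics is not reproduced here.

```python
# Minimal sketch (assumed design, not the paper's code): joint embedding with a
# ranking loss on the annotated pool and an adversarial alignment term that pulls
# un-annotated image embeddings toward the annotated distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients when backpropagating."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


class JointEmbedding(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, emb_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)                  # image features -> joint space
        self.txt_enc = nn.GRU(txt_dim, emb_dim, batch_first=True)    # caption word vectors -> joint space
        self.domain_clf = nn.Sequential(                             # annotated vs. un-annotated discriminator
            nn.Linear(emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def embed_image(self, img_feats):
        return F.normalize(self.img_proj(img_feats), dim=-1)

    def embed_text(self, word_embs):
        _, h = self.txt_enc(word_embs)
        return F.normalize(h.squeeze(0), dim=-1)


def ranking_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge loss over the in-batch cosine similarity matrix."""
    scores = img_emb @ txt_emb.t()
    pos = scores.diag().view(-1, 1)
    cost_txt = (margin + scores - pos).clamp(min=0)       # image -> negative captions
    cost_img = (margin + scores - pos.t()).clamp(min=0)   # caption -> negative images
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_txt.masked_fill(mask, 0).sum() + cost_img.masked_fill(mask, 0).sum()


def adversarial_alignment_loss(model, emb_annotated, emb_unannotated, lamb=0.1):
    """The discriminator separates the two pools; gradient reversal trains the encoder
    to make them indistinguishable, aligning the weakly supervised portion."""
    feats = torch.cat([emb_annotated, emb_unannotated], dim=0)
    labels = torch.cat([torch.ones(len(emb_annotated), 1),
                        torch.zeros(len(emb_unannotated), 1)], dim=0)
    logits = model.domain_clf(GradReverse.apply(feats, lamb))
    return F.binary_cross_entropy_with_logits(logits, labels)


if __name__ == "__main__":
    model = JointEmbedding()
    img = torch.randn(8, 2048)        # annotated image features
    img_u = torch.randn(8, 2048)      # un-annotated image features (weak supervision pool)
    txt = torch.randn(8, 12, 300)     # padded caption word embeddings
    v, v_u, t = model.embed_image(img), model.embed_image(img_u), model.embed_text(txt)
    loss = ranking_loss(v, t) + adversarial_alignment_loss(model, v, v_u)
    loss.backward()
```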