Position-aware image captioning with spatial relation
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Neurocomputing, 2022, 497, pp. 28-38
- Issue Date: 2022-08-01
Closed Access
| Filename | Description | Size |
|---|---|---|
| Position-aware image captioning with spatial relation.pdf | Published version | 1.77 MB |
This item is closed access and not available.
Image captioning aims to generate a natural-language description of a given image. The problem can be addressed by learning semantic information about visual objects and generating descriptions from the extracted embeddings. However, existing methods do not fully explore the spatial relationships between visual objects or their static positions. In this work, we propose a Position-Aware Transformer (PAT) model that extracts both regional and static global visual features and unifies them by incorporating spatial information aligned with each visual feature. To better represent spatial information and the correlations between extracted visual features, we propose and compare three approaches that explicitly encode spatial relations in the position embedding. Moreover, we jointly consider the static global and regional embeddings for spatial modeling. Experimental results show that our proposed model achieves competitive performance on the COCO image captioning dataset, where PAT reaches 38.7 BLEU-4, 28.6 METEOR, and 58.6 ROUGE-L. Extensive experiments suggest that PAT also achieves competitive performance on related vision-language tasks, including visual question answering (VQA) and multi-modal retrieval. Detailed ablation studies report how each component contributes to the final performance, providing a useful reference for follow-up work on spatial information representation.
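The abstract describes aligning explicit spatial information with each visual feature before transformer encoding. The sketch below illustrates one common way such a fusion can be done; it is a hypothetical PyTorch illustration, not the paper's method. The class name `BoxPositionEmbedding`, the 5-dimensional geometry vector (normalized box corners plus area), and the additive fusion are all assumptions, and the paper's three position-embedding variants are not reproduced here.

```python
import torch
import torch.nn as nn

class BoxPositionEmbedding(nn.Module):
    """Hypothetical sketch: project normalized bounding-box geometry into the
    feature space so each region feature carries explicit spatial information."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        # 5 geometry inputs per region: x_min, y_min, x_max, y_max, area
        self.proj = nn.Linear(5, d_model)

    def forward(self, boxes: torch.Tensor, region_feats: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, num_regions, 4), coordinates normalized to [0, 1]
        # region_feats: (batch, num_regions, d_model), e.g. from an object detector
        widths = boxes[..., 2] - boxes[..., 0]
        heights = boxes[..., 3] - boxes[..., 1]
        area = (widths * heights).unsqueeze(-1)       # (batch, num_regions, 1)
        geometry = torch.cat([boxes, area], dim=-1)   # (batch, num_regions, 5)
        # Additive fusion: region features become position-aware
        return region_feats + self.proj(geometry)

# Toy usage: fuse geometry into detector features before a transformer encoder.
embed = BoxPositionEmbedding(d_model=512)
feats = torch.randn(2, 36, 512)                 # e.g. 36 detected regions per image
xy = torch.rand(2, 36, 2)                       # random top-left corners in [0, 1]
wh = torch.rand(2, 36, 2) * (1 - xy)            # sizes chosen so boxes stay in [0, 1]
boxes = torch.cat([xy, xy + wh], dim=-1)        # (x_min, y_min, x_max, y_max)
position_aware = embed(boxes, feats)            # (2, 36, 512)
```

A learned linear projection of raw box geometry is only the simplest option; the paper compares three explicit spatial-relation encodings and additionally fuses a static global feature, which this toy example omits.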