Adaptively attending to visual attributes and linguistic knowledge for captioning
- Publication Type: Conference Proceeding
- Citation: MM 2017 - Proceedings of the 2017 ACM Multimedia Conference, 2017, pp. 1345-1353
- Issue Date: 2017-10-23
Closed Access
Filename | Description | Size
---|---|---
p1345-bin.pdf | Published version | 1.78 MB
This item is closed access and not available.
© 2017 Association for Computing Machinery. Visual content description has been attracting broad research attention in the multimedia community because it deeply uncovers the intrinsic semantic facets of visual data. Most existing approaches formulate visual captioning as a machine translation task (i.e., from vision to language) via a top-down paradigm with global attention, which fails to distinguish visual from non-visual parts during word generation. In this work, we propose a novel adaptive attention strategy for visual captioning that can selectively attend to salient visual content based on linguistic knowledge. Specifically, we design a key control unit, termed the visual gate, to adaptively decide "when" and "what" the language generator attends to during the word generation process. We map all the preceding outputs of the language generator into a latent space to derive a representation of sentence structure, which assists the visual gate in choosing the appropriate attention timing. Meanwhile, we employ a bottom-up workflow to learn a pool of semantic attributes that serve as the propositional attention resources. We evaluate the proposed approach on two commonly used benchmarks, MSCOCO and MSVD. The experimental results demonstrate the superiority of our proposed approach over several state-of-the-art methods.
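The abstract describes a visual gate that decides, at each decoding step, how much the next word should rely on attended visual attributes versus linguistic knowledge alone. The following is a minimal, illustrative sketch of one such gated attention step in plain NumPy; the function name, tensor shapes, and the sigmoid-gated blend are assumptions made for illustration and are not taken from the paper itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_attention_step(attributes, h_lang, W_a, W_h, w_g):
    """One decoding step of a gated adaptive attention (illustrative only).

    attributes : (k, d) pool of bottom-up semantic attribute vectors
    h_lang     : (d,)   latent summary of the preceding language-generator outputs
    W_a, W_h   : (d, d) hypothetical projection matrices
    w_g        : (d,)   hypothetical parameters of the scalar visual gate
    """
    # Attention over the attribute pool, conditioned on the language state.
    scores = attributes @ (W_a @ h_lang)          # (k,)
    alpha = softmax(scores)
    visual_context = alpha @ attributes           # (d,) attended visual content

    # Visual gate in [0, 1]: how strongly the next word should depend on
    # visual content rather than linguistic knowledge alone.
    gate = 1.0 / (1.0 + np.exp(-(w_g @ np.tanh(W_h @ h_lang))))

    # Blend the attended visual context with the linguistic state before
    # predicting the next word.
    context = gate * visual_context + (1.0 - gate) * h_lang
    return context, alpha, gate

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
d, k = 8, 5
ctx, alpha, gate = adaptive_attention_step(
    rng.standard_normal((k, d)), rng.standard_normal(d),
    rng.standard_normal((d, d)), rng.standard_normal((d, d)),
    rng.standard_normal(d))
```

Under this reading, a gate near zero lets the word be predicted mainly from the linguistic state (useful for non-visual words such as "the" or "of"), while a gate near one lets the attended semantic attributes dominate, which is the "when" and "what" behavior the abstract attributes to the visual gate.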