Describing video with attention-based bidirectional LSTM

Publication Type:
Journal Article
Citation:
IEEE Transactions on Cybernetics, 2019, 49 (7), pp. 2631 - 2641
Issue Date:
2019-07-01
File:
Describing Video With Attention-Based Bidirectional LSTM.pdf (Published Version, Adobe PDF, 1.56 MB)
© 2013 IEEE. Video captioning has been attracting broad research attention in the multimedia community. However, most existing approaches rely heavily on static visual information or capture only local temporal knowledge (e.g., within 16 frames), and thus can hardly describe motions accurately from a global view. In this paper, we propose a novel video captioning framework that integrates a bidirectional long short-term memory (BiLSTM) network and a soft attention mechanism to generate better global representations for videos and to enhance the recognition of lasting motions. To generate video captions, we exploit another long short-term memory network as a decoder to fully explore global contextual information. The benefits of our proposed method are twofold: 1) the BiLSTM structure comprehensively preserves global temporal and visual information and 2) the soft attention mechanism enables the language decoder to recognize and focus on principal targets within complex video content. We verify the effectiveness of the proposed framework on two widely used benchmarks, the Microsoft Video Description (MSVD) corpus and MSR-Video to Text (MSR-VTT), and the experimental results demonstrate the superiority of the proposed approach over several state-of-the-art methods.
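To make the described architecture concrete, the following is a minimal PyTorch sketch of the general pattern the abstract outlines: a BiLSTM encoder over frame features, an additive soft attention module, and an LSTM decoder conditioned on the attended video context. All layer names, dimensions, and the teacher-forcing training interface are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionCaptioner(nn.Module):
    """Sketch of a BiLSTM encoder + soft attention + LSTM decoder.

    Hyperparameters and layer names are assumptions for illustration,
    not the paper's actual configuration.
    """
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, embed_dim=512):
        super().__init__()
        # Bidirectional LSTM encodes the full frame sequence, so each time
        # step carries both past and future (i.e., global) temporal context.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Decoder consumes the previous word embedding concatenated with
        # the attended video context vector.
        self.decoder = nn.LSTMCell(embed_dim + 2 * hidden, hidden)
        # Additive (soft) attention between decoder state and encoder outputs.
        self.att_enc = nn.Linear(2 * hidden, hidden)
        self.att_dec = nn.Linear(hidden, hidden)
        self.att_v = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, T, feat_dim) frame features; captions: (B, L) token ids
        enc_out, _ = self.encoder(feats)               # (B, T, 2*hidden)
        B = feats.size(0)
        h = feats.new_zeros(B, self.decoder.hidden_size)
        c = feats.new_zeros(B, self.decoder.hidden_size)
        proj_enc = self.att_enc(enc_out)               # precompute (B, T, hidden)
        logits = []
        for t in range(captions.size(1)):
            # Soft attention: weight every encoder step by relevance to h,
            # letting the decoder focus on the principal targets per word.
            scores = self.att_v(torch.tanh(proj_enc + self.att_dec(h).unsqueeze(1)))
            alpha = torch.softmax(scores, dim=1)       # (B, T, 1)
            context = (alpha * enc_out).sum(dim=1)     # (B, 2*hidden)
            x = torch.cat([self.embed(captions[:, t]), context], dim=1)
            h, c = self.decoder(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)              # (B, L, vocab_size)
```

In this sketch, attending over the bidirectional encoder outputs is what gives each decoding step access to global temporal information rather than a fixed local window of frames.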