Hierarchical recurrent neural encoder for video representation with application to captioning

Publication Type:
Conference Proceeding
Citation:
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1029-1038
Issue Date:
2016-01-01
Files in This Item:
1511.03476v1.pdf (Accepted Manuscript Version, 1.06 MB, Adobe PDF)
Abstract:
Recently, deep learning approaches, especially deep Convolutional Neural Networks (ConvNets), have achieved high accuracy with fast processing speeds for image classification. Incorporating temporal structure into deep ConvNets for video representation has become a fundamental problem in video content analysis. In this paper, we propose a new approach, the Hierarchical Recurrent Neural Encoder (HRNE), to exploit the temporal information of videos. Compared to recent video representation approaches, this paper makes three contributions. First, HRNE efficiently exploits video temporal structure over a longer range by shortening the input information flow and compositing multiple consecutive inputs at a higher level. Second, the number of computational operations is significantly reduced while more nonlinearity is attained. Third, HRNE can uncover temporal transitions between frame chunks at different granularities, i.e., it can model the transitions between frames as well as the transitions between segments. We apply the new method to video captioning, where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state of the art on video captioning benchmarks.
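To make the two-level design concrete, below is a minimal sketch of such a hierarchical encoder in PyTorch. It is not the authors' implementation: the class name HRNE, the chunk_size parameter, and the use of standard LSTM cells over fixed-length, non-overlapping frame chunks are illustrative assumptions; the paper's attention mechanism and filter details are omitted.

```python
import torch
import torch.nn as nn

class HRNE(nn.Module):
    """Minimal sketch of a two-level hierarchical recurrent encoder.

    A lower LSTM encodes short, non-overlapping chunks of consecutive
    frame features; an upper LSTM then encodes the sequence of chunk
    summaries, shortening the information flow for long videos.
    """

    def __init__(self, feat_dim, hidden_dim, chunk_size):
        super().__init__()
        self.chunk_size = chunk_size
        self.lower = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.upper = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, frames):
        # frames: (batch, num_frames, feat_dim);
        # num_frames is assumed divisible by chunk_size.
        b, t, d = frames.shape
        chunks = frames.reshape(b * (t // self.chunk_size), self.chunk_size, d)
        _, (h, _) = self.lower(chunks)             # encode each chunk of frames
        chunk_feats = h[-1].reshape(b, t // self.chunk_size, -1)
        _, (h_top, _) = self.upper(chunk_feats)    # model transitions between chunks
        return h_top[-1]                           # video-level representation

# Usage: 32 frames of 2048-d ConvNet features, chunked into groups of 8.
video = torch.randn(4, 32, 2048)
encoder = HRNE(feat_dim=2048, hidden_dim=512, chunk_size=8)
print(encoder(video).shape)  # torch.Size([4, 512])
```

Because the upper LSTM sees only one summary per chunk, a video of t frames is processed in t / chunk_size upper-level steps, which is what shortens the information flow and lets the encoder cover longer temporal ranges.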