Strategies for searching video content with text queries or video examples

Publisher:
ITE Transactions on Media Technology and Applications (MTA)
Publication Type:
Journal Article
Citation:
ITE Transactions on Media Technology and Applications, 2016, 4 (3), pp. 227 - 238
Issue Date:
2016-01-01
Files in This Item:
4_227.pdf (Published Version, Adobe PDF, 3.22 MB)
© 2016 by ITE Transactions on Media Technology and Applications (MTA).
Abstract:
The large number of user-generated videos uploaded to the Internet every day has given rise to many commercial video search engines, which rely mainly on text metadata for search. However, metadata is often lacking for user-generated videos, making them unsearchable by current search engines. Content-based video retrieval (CBVR) tackles this metadata-scarcity problem by directly analyzing the visual and audio streams of each video. CBVR encompasses multiple research topics, including low-level feature design, feature fusion, semantic detector training, and video search/reranking. We present novel strategies in these topics to enhance CBVR in both accuracy and speed under different query inputs, including pure textual queries and queries by video example. Our proposed strategies were incorporated into our submission to the TRECVID 2014 Multimedia Event Detection evaluation, where our system outperformed other submissions on both text queries and video example queries, demonstrating the effectiveness of our proposed approaches.