Dynamic concept composition for zero-example event detection

Publication Type:
Conference Proceeding
Citation:
30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, pp. 3464–3470
Issue Date:
2016-01-01
© Copyright 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model on the assumption that events (e.g., birthday party) can be described by multiple mid-level semantic concepts (e.g., "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. We then evaluate the semantic correlation of each concept with respect to the event of interest and select the relevant concept classifiers, which are applied to all test videos to obtain multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each test video by exploiting a set of videos available online with freeform text descriptions of their content. To validate the effectiveness of the proposed approach, we have conducted extensive experiments on the TRECVID MEDTest 2014, MEDTest 2013, and CCV datasets. The experimental results confirm the superiority of the proposed approach.
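The pipeline outlined in the abstract, pre-trained concept classifiers, relevance-based concept selection, and weighted fusion of the resulting score vectors, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the concept list, the word-overlap relevance measure, and the simulated classifier scores are all stand-ins, and the paper's key contribution (learning per-video weights from text-described web videos) is only noted in the comments.

```python
import numpy as np

# Hypothetical concept vocabulary; the paper pre-trains one classifier
# per concept on external data sources.
CONCEPTS = ["blowing candle", "birthday cake", "singing", "dog running"]


def concept_scores(num_concepts, rng):
    """Stand-in for applying pre-trained concept classifiers to one video.

    A real system would run each trained classifier on the video's
    features; here we simulate one prediction score per concept in [0, 1].
    """
    return rng.random(num_concepts)


def semantic_relevance(event_query, concepts):
    """Toy semantic correlation of each concept w.r.t. the event query.

    The paper measures semantic correlation between the event and each
    concept; simple word overlap is used here purely as a placeholder.
    """
    query_words = set(event_query.lower().split())
    return np.array([
        len(query_words & set(c.split())) / max(len(c.split()), 1)
        for c in concepts
    ])


def detect_event(event_query, rng):
    """Zero-example event score: weighted sum of selected concept scores.

    This is the fixed-weight baseline the paper improves upon; the
    proposed method would instead learn these weights for each test
    video using online videos with freeform text descriptions.
    """
    scores = concept_scores(len(CONCEPTS), rng)
    weights = semantic_relevance(event_query, CONCEPTS)
    selected = weights > 0  # keep only relevant concept classifiers
    if not selected.any():
        return 0.0
    w = weights[selected] / weights[selected].sum()  # normalize weights
    return float(w @ scores[selected])


rng = np.random.default_rng(0)
print(detect_event("birthday party with cake and candle", rng))
```

In this sketch the weights are fixed per event; substituting per-video weights, estimated by relating each test video to text-described reference videos, is the dynamic composition step the paper proposes.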