Spatiotemporal Edges for Arbitrarily Moving Video Classification in Protected and Sensitive Scenes

Publisher:
BON VIEW PUBLISHING PTE
Publication Type:
Journal Article
Citation:
Artificial Intelligence and Applications
Abstract:
Classifying arbitrarily moving objects, such as vehicles and human beings, in real environments (e.g., protected and sensitive areas) is challenging because shaky cameras and wind cause arbitrary deformations and motion directions. This work adopts a spatiotemporal approach for classifying arbitrarily moving objects. The intuition behind the approach is that the behavior of objects moved arbitrarily by wind or a shaky camera is inconsistent and unstable, whereas the behavior of static objects is consistent and stable. The proposed method segments foreground objects from the background using the difference between the median frame and each individual frame; this step yields foreground information for every frame. The method then finds static and dynamic edges by subtracting the Canny edges of the foreground information from the Canny edges of the respective input frames. The ratio of the number of static edges to the number of dynamic edges in each frame is used as a feature. The features are normalized to avoid the problems of imbalanced feature scale and irrelevant features. For classification, the work uses 10-fold cross-validation to split training and testing samples, and a random forest classifier performs the final classification of frames into those containing static objects and those containing arbitrarily moving objects. To evaluate the proposed method, we construct our own dataset, which contains videos of static objects and of objects moved arbitrarily by shaky camera and wind. The results on this dataset show that the proposed method achieves state-of-the-art performance (a 76% classification rate), which is 14% better than the best existing method.
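The pipeline described in the abstract (median-frame foreground extraction, static/dynamic edge ratios, feature normalization, and a random forest with 10-fold cross-validation) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a simple gradient-magnitude threshold stands in for the Canny detector to keep the sketch dependency-light (numpy and scikit-learn only), and the threshold and classifier parameter values are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def edges(img, thresh=30.0):
    """Binary edge map via gradient magnitude (stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def frame_features(frames):
    """Per-frame ratio of static to dynamic edge counts.

    frames: (T, H, W) grayscale video. The median frame approximates the
    static background; differencing against it isolates moving foreground.
    """
    median = np.median(frames, axis=0)
    feats = []
    for f in frames:
        fg = np.abs(f.astype(float) - median)   # foreground information
        frame_edges = edges(f)                  # all edges in the input frame
        fg_edges = edges(fg)                    # edges of the moving parts
        static = frame_edges & ~fg_edges        # frame edges minus foreground edges
        dynamic = frame_edges & fg_edges
        feats.append((static.sum() + 1) / (dynamic.sum() + 1))  # +1 avoids /0
    feats = np.asarray(feats, dtype=float).reshape(-1, 1)
    rng = feats.max() - feats.min()             # min-max normalization
    return (feats - feats.min()) / rng if rng > 0 else np.zeros_like(feats)

def classify(X, y, folds=10):
    """k-fold cross-validated accuracy of a random forest on the features."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=folds).mean()
```

In use, each video frame contributes one normalized ratio feature, and `classify` reports mean accuracy over the folds; with real data the fold count would be 10 as in the paper.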