Learning multi-view deep features for small object retrieval in surveillance scenarios

Publication Type:
Conference Proceeding
Citation:
MM 2015 - Proceedings of the 2015 ACM Multimedia Conference, 2015, pp. 859 - 862
Issue Date:
2015-10-13
Files in This Item:
Filename: p859-guo.pdf (Published version, Adobe PDF, 992.67 kB)
© 2015 ACM. With the explosive growth of surveillance videos, object retrieval has become a significant task for security monitoring. However, visual objects in surveillance videos are usually of small size and subject to complex lighting conditions, view changes, and partial occlusions, which makes it difficult to efficiently retrieve objects of interest from a large-scale dataset. Although deep features have achieved promising results on object classification and retrieval and have been verified to capture rich semantic structure, they lack adequate color information, which is as crucial as structure information for effective object representation. In this paper, we propose to leverage a discriminative Convolutional Neural Network (CNN) to learn deep structure and color features that together form an efficient multi-view object representation. Specifically, we utilize a CNN trained on ImageNet to extract rich semantic structure information. Meanwhile, we propose a CNN model supervised by 11 color names to extract deep color features. Compared with traditional color descriptors, deep color features can capture the common color property across different illumination conditions. Then, the complementary multi-view deep features are encoded into short binary codes by Locality-Sensitive Hashing (LSH) and fused to retrieve objects. Retrieval experiments are performed on a dataset of 100k objects extracted from multi-camera surveillance videos. Comparison results with several popular visual descriptors show the effectiveness of the proposed approach.
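The pipeline described in the abstract (deep structure features plus deep color features, each hashed to short binary codes and fused at retrieval time) can be sketched with random-projection LSH and Hamming-distance ranking. This is a minimal illustration, not the authors' implementation: the feature dimensions (4096 for the structure CNN, 11 for the color-name CNN), the code lengths, and the fusion weight are all assumptions, and random vectors stand in for real CNN activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for deep features: 4096-d structure features (e.g. an
# ImageNet-trained CNN layer) and 11-d color-name CNN outputs.
# Both the dimensions and the random data are illustrative assumptions.
n_objects = 1000
structure_feats = rng.standard_normal((n_objects, 4096))
color_feats = rng.standard_normal((n_objects, 11))

def lsh_encode(feats, n_bits, seed):
    """Random-projection LSH: the sign of each random projection
    gives one bit of the binary code."""
    proj_rng = np.random.default_rng(seed)
    proj = proj_rng.standard_normal((feats.shape[1], n_bits))
    return (feats @ proj > 0).astype(np.uint8)

struct_codes = lsh_encode(structure_feats, n_bits=128, seed=1)
color_codes = lsh_encode(color_feats, n_bits=32, seed=2)

def hamming(query_code, codes):
    """Hamming distance from one binary code to every code in `codes`."""
    return np.count_nonzero(codes != query_code, axis=1)

def retrieve(q_idx, w=0.7, top_k=5):
    """Late fusion: weighted sum of length-normalized Hamming distances
    over the two views. The weight w=0.7 is an assumed value."""
    d_struct = hamming(struct_codes[q_idx], struct_codes) / struct_codes.shape[1]
    d_color = hamming(color_codes[q_idx], color_codes) / color_codes.shape[1]
    fused = w * d_struct + (1 - w) * d_color
    return np.argsort(fused)[:top_k]

# The query object itself has fused distance 0, so it ranks first.
print(retrieve(0))
```

Binary codes make large-scale retrieval cheap: Hamming distances over short codes are far faster to compute than Euclidean distances over raw 4096-d features, which matters at the 100k-object scale reported in the paper.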