A unified model sharing framework for moving object detection

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Signal Processing, 2016, 124, pp. 72–80
Issue Date:
2016
Abstract:
Millions of surveillance cameras have been installed in public areas, producing vast amounts of video data every day. There is an urgent need for intelligent techniques that automatically detect and segment moving objects, a capability with wide applications. Various approaches to moving object detection based on background modeling have been developed in the literature. Most of them focus on temporal information but partly or entirely ignore spatial information, which makes them sensitive to noise and background motion. In this paper, we propose a unified model sharing framework for moving object detection. First, to exploit the spatial-temporal correlation across different pixels, we establish a many-to-one correspondence between pixels and models through model sharing, and each pixel is labeled as foreground or background by searching for an optimal matched model in its neighborhood. A random sampling strategy is then introduced for online update of the shared models. In this way, we reduce the total number of models dramatically while still matching a proper model to each pixel accurately. Furthermore, existing approaches can be naturally embedded into the proposed sharing framework; two popular ones, the statistical model and the sample consensus model, are used to verify its effectiveness. Experiments and comparisons on the ChangeDetection 2014 benchmark demonstrate the superiority of the model sharing solution.
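The label-then-update loop described in the abstract can be sketched as follows. This is a minimal illustrative Python sketch, not the paper's exact formulation: the names (`SharedModel`, `label_pixel`), the grayscale sample-consensus matching rule, and all thresholds are assumptions. It shows the many-to-one sharing idea (several pixels map to one model object), neighborhood search for the best matched model, and the random-sampling online update.

```python
import random


class SharedModel:
    """One background model shared by several neighboring pixels
    (sample-consensus style; parameters are illustrative assumptions)."""

    def __init__(self, samples):
        self.samples = list(samples)

    def match_count(self, value, radius=20):
        # Number of stored samples close to the observed intensity.
        return sum(1 for s in self.samples if abs(s - value) <= radius)

    def random_update(self, value, prob=1.0 / 16):
        # Random-sampling update: occasionally overwrite one stored
        # sample, so the shared model slowly adapts to the scene.
        if self.samples and random.random() < prob:
            self.samples[random.randrange(len(self.samples))] = value


def label_pixel(x, y, value, model_grid, search_radius=1, min_matches=2):
    """Label one pixel by searching its neighborhood for the best matched
    shared model. model_grid maps (x, y) -> SharedModel, and several pixels
    may map to the same model object (many-to-one correspondence)."""
    best, best_score = None, 0
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            model = model_grid.get((x + dx, y + dy))
            if model is None:
                continue
            score = model.match_count(value)
            if score > best_score:
                best, best_score = model, score
    if best is not None and best_score >= min_matches:
        best.random_update(value)  # online update of the shared model
        return "background", best
    return "foreground", None
```

Because neighboring pixels search the same pool of shared models, the total number of models stored per frame drops well below one per pixel, which is the efficiency gain the framework claims.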