Dynamic background learning through deep auto-encoder networks

Publication Type:
Conference Proceeding
Citation:
MM 2014 - Proceedings of the 2014 ACM Conference on Multimedia, 2014, pp. 107 - 116
Issue Date:
2014-01-01
Background learning is a pre-processing step for motion detection, which in turn is a basic step in video analysis. For static backgrounds, many previous works have already achieved good performance; however, results on learning dynamic backgrounds still leave much room for improvement. To address this challenge, this paper proposes a novel and practical method based on deep auto-encoder networks. First, dynamic background images are extracted from video frames containing moving objects by a deep auto-encoder network (called the Background Extraction Network). Then, a dynamic background model is learned by another deep auto-encoder network (called the Background Learning Network), which takes the extracted background images as input. For greater flexibility, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks that separates dynamic backgrounds from foregrounds very efficiently, and 2) an on-line learning method that accelerates the training of the Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance on six benchmark data sets. In particular, the experiments show that our algorithm handles backgrounds with large variations very well.
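The core idea of auto-encoder-based background learning can be illustrated with a minimal sketch. This is not the authors' implementation: the paper's cascade uses two deep networks, whereas the toy model below is a single shallow auto-encoder with illustrative layer sizes, learning rate, and synthetic data, all of which are assumptions. The intuition it demonstrates is the same: a limited-capacity auto-encoder trained on video frames tends to reconstruct the stable background, so moving foreground objects appear as large reconstruction residuals, and training one frame at a time corresponds to the on-line updating described above.

```python
import numpy as np

class TinyAutoEncoder:
    """Illustrative shallow auto-encoder (hypothetical, not the paper's model)."""

    def __init__(self, n_pixels, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_pixels, n_hidden))  # encoder weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_pixels))  # decoder weights
        self.b2 = np.zeros(n_pixels)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sigmoid(x @ self.W1 + self.b1)  # encode frame
        y = self._sigmoid(h @ self.W2 + self.b2)  # decode -> background estimate
        return h, y

    def train_step(self, x):
        # One on-line gradient-descent step on the squared reconstruction error.
        h, y = self.forward(x)
        dy = (y - x) * y * (1 - y)          # output-layer delta (sigmoid)
        dh = (dy @ self.W2.T) * h * (1 - h)  # hidden-layer delta
        self.W2 -= self.lr * np.outer(h, dy)
        self.b2 -= self.lr * dy
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh
        return float(np.mean((y - x) ** 2))

# Synthetic data: frames share a fixed background with a small moving bright
# patch standing in for a foreground object.
rng = np.random.default_rng(1)
background = rng.uniform(0.2, 0.4, 64)
frames = []
for t in range(200):
    f = background.copy()
    f[(t % 16) * 4:(t % 16) * 4 + 4] = 0.9  # moving "foreground"
    frames.append(f)

ae = TinyAutoEncoder(n_pixels=64, n_hidden=8)
first = ae.train_step(frames[0])
last = first
for _ in range(5):                # a few passes of on-line updates
    for f in frames:
        last = ae.train_step(f)
print(f"reconstruction error: {first:.4f} -> {last:.4f}")
```

After training, subtracting the auto-encoder's reconstruction from a new frame highlights the foreground patch, which is the separation the cascade above performs with much deeper networks and real video.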