Tracking via robust multi-task multi-view joint sparse representation

Publication Type:
Conference Proceeding
Citation:
Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 649 - 656
Issue Date:
2013-01-01
Abstract:
Combining multiple observation views has proven beneficial for tracking. In this paper, we cast tracking as a novel multi-task multi-view sparse learning problem and exploit cues from multiple views, including various types of visual features such as intensity, color, and edge, where each feature observation can be sparsely represented by a linear combination of atoms from an adaptive feature dictionary. The proposed method is integrated into a particle filter framework, where every view in each particle is regarded as an individual task. We jointly consider the underlying relationships between tasks across different views and different particles, and address them in a unified robust multi-task formulation. In addition, to capture frequently emerging outlier tasks, we decompose the representation matrix into two collaborative components, which enables a more robust and accurate approximation. We show that the proposed formulation can be efficiently solved using the Accelerated Proximal Gradient method with a small number of closed-form updates. The presented tracker is implemented using four types of features and is tested on numerous benchmark video sequences. Both qualitative and quantitative results demonstrate the superior performance of the proposed approach compared to several state-of-the-art trackers. © 2013 IEEE.
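For illustration only, the following is a minimal sketch of the kind of joint sparse coding step the abstract describes: multiple view/particle observations coded jointly over a shared dictionary, with an L2,1 group-sparsity penalty solved by FISTA (an accelerated proximal gradient method). It omits the paper's robust two-component decomposition and adaptive dictionary update, and all names (prox_l21, joint_sparse_coding) and parameter values are hypothetical, not the authors' implementation.

import numpy as np

def prox_l21(C, tau):
    """Row-wise soft-thresholding: proximal operator of tau * ||C||_{2,1}.
    Shrinks entire rows toward zero, so the same atoms are shared across tasks."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return C * scale

def joint_sparse_coding(D, X, lam=0.1, n_iter=200):
    """Minimize 0.5*||X - D C||_F^2 + lam*||C||_{2,1} with FISTA.
    D: (d, k) dictionary of template atoms; X: (d, n_tasks) stacked observations,
    one column per view/particle task. Returns the coefficient matrix C (k, n_tasks)."""
    k, n = D.shape[1], X.shape[1]
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth gradient
    C = Z = np.zeros((k, n))
    t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)               # gradient of the quadratic data term at Z
        C_new = prox_l21(Z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Z = C_new + ((t - 1.0) / t_new) * (C_new - C)   # Nesterov momentum step
        C, t = C_new, t_new
    return C

# Toy usage: 20-dim features, 10 dictionary atoms, 5 tasks (views/particles).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
X = D[:, :3] @ rng.standard_normal((3, 5))     # observations built from 3 shared atoms
C = joint_sparse_coding(D, X)
print(np.round(C, 2))                          # nonzero rows concentrate on the shared atoms

In a tracker along these lines, the reconstruction error of each particle's joint code would feed the particle-filter likelihood; the row-wise penalty is what couples the per-view, per-particle tasks into a single joint problem.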