Metric learning based structural appearance model for robust visual tracking

Publication Type: Journal Article
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24 (5), pp. 865-877
Issue Date: 2014-01-01
Appearance modeling is a key issue for the success of a visual tracker. Sparse-representation-based appearance modeling has received an increasing amount of interest in recent years. However, most existing work uses reconstruction errors to compute the observation likelihood under a generative framework, which can perform poorly, especially under significant appearance variations. In this paper, we advocate an approach to visual tracking that seeks an appropriate metric in the feature space of sparse codes and propose a metric-learning-based structural appearance model for more accurate matching of different appearances. This structural representation is acquired by performing multiscale max pooling on the weighted local sparse codes of image patches. An online multiple instance metric learning algorithm is proposed that learns a discriminative and adaptive metric, thereby better distinguishing the visual object of interest from the background. The multiple instance setting alleviates the drift problem potentially caused by misaligned training examples. Tracking is then carried out within a Bayesian inference framework, in which the learned metric and the structural object representation are used to construct the observation model. Comprehensive experiments on challenging image sequences demonstrate qualitatively and quantitatively that the proposed algorithm outperforms state-of-the-art methods. © 2013 IEEE.
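The two building blocks named in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the local sparse codes of an object's patches are already computed and laid out on a square grid, applies spatial-pyramid-style multiscale max pooling to form the structural representation, and scores a candidate against a template with a Mahalanobis distance under a learned positive semidefinite metric matrix `M`. The pooling scales, the Gaussian likelihood form, and the function names are illustrative assumptions.

```python
import numpy as np

def multiscale_max_pool(codes, grid, scales=(1, 2, 4)):
    """Multiscale max pooling of local sparse codes (illustrative sketch).

    codes : (grid*grid, k) array, one sparse code per patch on a grid x grid layout.
    Returns the concatenation of per-cell max-pooled vectors over all scales.
    """
    k = codes.shape[1]
    cube = codes.reshape(grid, grid, k)   # restore the spatial layout of patches
    pooled = []
    for s in scales:                      # at scale s the grid is split into s x s cells
        step = grid // s
        for i in range(s):
            for j in range(s):
                cell = cube[i * step:(i + 1) * step, j * step:(j + 1) * step, :]
                pooled.append(cell.max(axis=(0, 1)))  # element-wise max within the cell
    return np.concatenate(pooled)

def observation_likelihood(x, template, M, sigma=1.0):
    """Gaussian likelihood from the squared Mahalanobis distance under metric M."""
    d = x - template
    return float(np.exp(-(d @ M @ d) / (2.0 * sigma ** 2)))
```

With `grid=4` and `scales=(1, 2, 4)` the pooled feature concatenates 1 + 4 + 16 = 21 cells, so its length is 21 times the dictionary size `k`. In a full tracker, `M` would be updated online from multiple-instance-labeled bags of candidate patches rather than fixed.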