Robust Distracter-Resistive Tracker via Learning a Multi-Component Discriminative Dictionary

Publication Type:
Journal Article
Citation:
IEEE Transactions on Circuits and Systems for Video Technology, 2018
Issue Date:
2018-08-01
Files in This Item:
Filename: 08424191.pdf
Description: Published Version
Size: 4.4 MB
Format: Adobe PDF
Abstract:
Discriminative dictionary learning (DDL) provides an appealing paradigm for appearance modeling in visual tracking. However, most existing DDL-based trackers cannot handle drastic appearance changes, especially in scenarios with background clutter and/or interference from similar objects. One reason is that they often lose the subtle visual information that is critical for distinguishing an object from distracters. In this paper, we explore the use of deep features extracted from convolutional neural networks (CNNs) to improve the object representation and propose a robust distracter-resistive tracker that learns a multi-component discriminative dictionary. The proposed method exploits both intra-class and inter-class visual information to learn shared atoms and class-specific atoms. By imposing several constraints on the objective function, the learned dictionary is made reconstructive, compressive, and discriminative, and can thus better distinguish an object from the background. In addition, the convolutional features preserve structural information for object localization and balance the discriminative power and semantic information of the object. Tracking is carried out within a Bayesian inference framework in which a joint decision measure is used to construct the observation model. To alleviate the drift problem, reliable tracking results obtained online are accumulated to update the dictionary. Both qualitative and quantitative results on the CVPR2013 benchmark, the VOT2015 dataset, and the SPOT dataset demonstrate that our tracker performs favorably against state-of-the-art approaches.
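The abstract does not state the learning objective explicitly. Purely as an illustrative sketch (the symbols Y_c, D_0, D_c, X_c, eta, and lambda below are assumptions, not taken from the paper), a multi-component objective with shared and class-specific atoms that is reconstructive, discriminative, and sparse is commonly written in a form such as

\min_{D_0,\,\{D_c\},\,\{X_c\}} \; \sum_{c=1}^{C} \Big( \underbrace{\big\| Y_c - D_0 X_c^{(0)} - D_c X_c^{(c)} \big\|_F^2}_{\text{reconstructive}} \;+\; \underbrace{\eta \sum_{j \neq c} \big\| D_j X_c^{(j)} \big\|_F^2}_{\text{discriminative}} \;+\; \underbrace{\lambda \big\| X_c \big\|_1}_{\text{compressive (sparse)}} \Big)

where Y_c collects the CNN features of class c (e.g., object versus background), D_0 holds the shared atoms, D_c the class-specific atoms, and X_c^{(j)} denotes the coding of Y_c over sub-dictionary D_j. The first term enforces faithful reconstruction, the second suppresses coding of one class over another class's atoms, and the l1 term enforces sparsity; the paper's actual formulation and constraints may differ.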