On Splitting Dataset: Boosting Locally Adaptive Regression Kernels for Car Localization

Publisher:
IEEE Press
Publication Type:
Conference Proceeding
Citation:
2012 12th International Conference on Control, Automation, Robotics & Vision, 2012, pp. 1154–1159
Issue Date:
2012-01
In this paper, we study the impact of learning an Adaboost classifier with a small sample set (i.e., with few training examples). In particular, we use car localization as the underlying application, because car localization is applicable to a wide range of real-world tasks. To evaluate the performance of Adaboost learning with few examples, we apply Adaboost learning to a recently proposed feature descriptor, the Locally Adaptive Regression Kernel (LARK). As a state-of-the-art feature descriptor, LARK is robust against illumination changes and noise. More importantly, we use LARK because its spatial property is also favorable for our purpose: each patch in the LARK descriptor corresponds to one unique pixel in the original image. In addition to learning a detector from the entire training dataset, we also split the original training dataset into several sub-groups and train one detector for each sub-group. We compare the features selected by each sub-group's detector with those selected by the detector learnt from the entire training dataset, and propose improvements based on the comparison results. Our experimental results indicate that Adaboost learning is only successful on a small dataset when the learnt features simultaneously satisfy two conditions: 1. the features are learnt from the Region of Interest (ROI), and 2. the features are sufficiently far away from each other.
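The dataset-splitting protocol described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: a generic random feature matrix stands in for the flattened LARK descriptors, scikit-learn's `AdaBoostClassifier` (whose default weak learner is a decision stump) stands in for the paper's Adaboost detector, and feature importances serve as a proxy for which descriptor locations each detector selects.

```python
# Sketch of the split-and-compare protocol, assuming a generic feature
# matrix in place of LARK descriptors (hypothetical stand-in data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 400, 64            # stand-in for flattened LARK patches
X = rng.normal(size=(n_samples, n_features))
# Synthetic labels (car vs. background), loosely tied to a few features.
y = (X[:, :4].sum(axis=1) > 0).astype(int)

def train_detector(X, y):
    """Train one Adaboost detector (decision stumps as weak learners)."""
    return AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# Detector learnt from the entire training set.
full_detector = train_detector(X, y)

# Split the training set into k sub-groups; train one detector per group.
k = 4
sub_detectors = [
    train_detector(Xs, ys)
    for Xs, ys in zip(np.array_split(X, k), np.array_split(y, k))
]

# Compare which features each sub-group detector selects against the
# full-dataset detector (via LARK's spatial property, a selected feature
# maps back to one pixel in the image).
full_feats = set(np.flatnonzero(full_detector.feature_importances_))
for i, det in enumerate(sub_detectors):
    sub_feats = set(np.flatnonzero(det.feature_importances_))
    shared = len(sub_feats & full_feats)
    print(f"sub-group {i}: {len(sub_feats)} features selected, "
          f"{shared} shared with full-dataset detector")
```

In the paper's setting, the comparison step would further check the two conditions reported in the results: whether the selected features fall inside the ROI, and whether they are sufficiently spread out spatially.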