A precise human detection model using combination of feature extraction techniques in a dynamic environment

Publication Type:
Conference Proceeding
Citation:
International Conference on Image and Vision Computing New Zealand (IVCNZ), vol. 2017-December, 2018, pp. 1-6
Issue Date:
2018-07-03
© 2017 IEEE. This paper presents a machine learning-based human detection model that focuses on improving detection precision for moving humans in video frames. The problem is addressed through pre-processing and an efficient feature extraction methodology. A combination of features is extracted: Histograms of Oriented Gradients (HoG), Histograms of Colors (HoC), and Histograms of Bars (HoB). These feature sets are combined to form the final feature vector describing the human shape, and a Support Vector Machine (SVM)-based classifier is used for classification. Improving precision allows the detector to make better detections by reducing both false positives and missed detections, the main problems faced by current detection techniques. The algorithm is trained on the INRIA dataset and tested on sequences depicting humans moving in different environments. In the testing phase, the search space is first reduced using an upper-body detector built on Haar features; human detection with the proposed feature extraction technique is then carried out within the reduced space. The proposed detector performs well, and the number of missed detections is reduced. Some false detections remain, but these arise because certain objects resemble humans. The proposed model is benchmarked against current state-of-the-art detectors on a challenging test dataset, and Receiver Operating Characteristic (ROC) and precision-recall curves are plotted to compare and evaluate the results. The proposed model outperforms most of the current state-of-the-art detectors.
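The abstract describes a two-stage pipeline: a Haar-feature upper-body detector prunes the search space, and an SVM then classifies each candidate window using a combined feature vector. The sketch below illustrates that flow under explicit assumptions, not the paper's exact method: it uses OpenCV's stock HOGDescriptor and haarcascade_upperbody.xml in place of the paper's extractors, approximates HoC with a simple per-channel color histogram, and omits HoB, whose definition is specific to the paper. The helper names (combined_features, detect) are illustrative only.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# Assumptions (not from the paper): OpenCV's default HOGDescriptor and
# Haar upper-body cascade stand in for the paper's feature extractors;
# HoC is approximated by per-channel color histograms; HoB is omitted.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

WIN_SIZE = (64, 128)       # standard INRIA-style detection window (w, h)
hog = cv2.HOGDescriptor()  # defaults match the 64x128 window

def hoc_features(window, bins=16):
    """Rough Histogram-of-Colors stand-in: normalized per-channel histograms."""
    hists = [cv2.calcHist([window], [c], None, [bins], [0, 256]).ravel()
             for c in range(3)]
    feat = np.concatenate(hists)
    return feat / (feat.sum() + 1e-8)

def combined_features(window):
    """Concatenate HoG and HoC into one feature vector, as the paper combines features."""
    window = cv2.resize(window, WIN_SIZE)
    return np.concatenate([hog.compute(window).ravel(), hoc_features(window)])

def train(windows, labels):
    """Fit a linear SVM on cropped windows (label 1 = human, 0 = background)."""
    X = np.stack([combined_features(w) for w in windows])
    clf = LinearSVC(C=0.01)
    clf.fit(X, labels)
    return clf

# Haar upper-body detector used to narrow the search space before classification.
upper_body = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")

def detect(frame, clf):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = []
    for (x, y, w, h) in upper_body.detectMultiScale(gray, 1.1, 3):
        # Expand each upper-body hit to a full-body candidate window,
        # then verify it with the SVM over the combined features.
        roi = frame[y:min(y + 3 * h, frame.shape[0]), x:x + w]
        if roi.size and clf.predict([combined_features(roi)])[0] == 1:
            detections.append((x, y, w, roi.shape[0]))
    return detections
```

The design point this sketch captures is that the SVM only verifies windows the cheap Haar stage proposes, which is what reduces false positives relative to exhaustively classifying every sliding-window position.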