Feature selection for datasets with imbalanced class distributions

Publication Type:
Journal Article
Citation:
International Journal of Software Engineering and Knowledge Engineering, 2010, 20 (2), pp. 113 - 137
Issue Date:
2010-03-01
Filename:
2011000590OK.pdf (Adobe PDF, 407.97 kB)
Feature selection for supervised learning concerns the problem of selecting a number of important features (w.r.t. the class labels) for the purpose of training accurate prediction models. Traditional feature selection methods, however, fail to take the sample distribution into consideration, which may lead to poor prediction for minority class examples. Due to the sophistication and cost involved in the data collection process, many applications, such as biomedical research, commonly face biased data collections in which one class of examples (e.g., diseased samples) is significantly smaller than the other classes (e.g., normal samples). For these applications, the minority class examples, such as diseased samples, credit card frauds, and network intrusions, constitute only a small portion of the data but deserve full attention for accurate prediction. In this paper, we propose three filtering techniques, Higher Weight (HW), Differential Minority Repeat (DMR) and Balanced Minority Repeat (BMR), to identify important features from datasets with biased sample distributions. Experimental comparisons with the ReliefF method on five datasets demonstrate the effectiveness of the proposed methods in selecting informative features for accurate prediction of minority class examples. © 2010 World Scientific Publishing Company.
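To make the problem setting concrete, the sketch below illustrates one generic way a filter-style feature selector can account for class imbalance: scoring features on repeated majority-downsampled subsamples so that minority class examples carry equal weight in the ranking. The function names, the separation statistic, and the resampling scheme are illustrative assumptions only; they are not the HW, DMR, or BMR procedures proposed in the paper.

```python
# Hypothetical sketch of class-imbalance-aware filter feature selection.
# It is NOT the paper's HW/DMR/BMR algorithms; it only illustrates scoring
# features on repeated class-balanced subsamples so that minority class
# examples influence the feature ranking.
import numpy as np

def balanced_feature_scores(X, y, minority_label, n_repeats=10, seed=0):
    """Average a simple separation statistic (absolute difference of class
    means scaled by the pooled std) over repeated subsamples in which the
    majority class is downsampled to the minority-class size."""
    rng = np.random.default_rng(seed)
    minority_idx = np.flatnonzero(y == minority_label)
    majority_idx = np.flatnonzero(y != minority_label)
    scores = np.zeros(X.shape[1])
    for _ in range(n_repeats):
        sampled_majority = rng.choice(majority_idx, size=minority_idx.size,
                                      replace=False)
        idx = np.concatenate([minority_idx, sampled_majority])
        Xb, yb = X[idx], y[idx]
        mu_min = Xb[yb == minority_label].mean(axis=0)
        mu_maj = Xb[yb != minority_label].mean(axis=0)
        pooled_std = Xb.std(axis=0) + 1e-12  # avoid division by zero
        scores += np.abs(mu_min - mu_maj) / pooled_std
    return scores / n_repeats

def select_top_k(X, y, minority_label, k):
    """Return indices of the k highest-scoring features."""
    scores = balanced_feature_scores(X, y, minority_label)
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic imbalanced data: 1000 majority vs. 50 minority samples,
    # with only the first 5 of 100 features informative for the minority class.
    X = rng.normal(size=(1050, 100))
    y = np.array([0] * 1000 + [1] * 50)
    X[y == 1, :5] += 2.0
    print(select_top_k(X, y, minority_label=1, k=5))
```

Downsampling the majority class before scoring is only one of several plausible ways to keep minority examples from being swamped; methods such as ReliefF instead weight features by nearest-neighbor margins, which is the baseline the paper compares against.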