Feature Selection with Biased Sample Distributions

IEEE Computer Society
Publication Type:
Conference Proceeding
Proc. of the IEEE International Conference on Information Reuse (IRI-09), 2009, pp. 23 - 28
Feature selection concerns the problem of selecting a number of important features (w.r.t. the class labels) in order to build accurate prediction models. Traditional feature selection methods, however, fail to take sample distributions into consideration, which may lead to poor predictions for minority class examples. Due to the sophistication and cost involved in the data collection process, many applications, such as biomedical research, commonly face biased data collections in which one class of examples (e.g., diseased samples) is significantly smaller than the other classes (e.g., normal samples). For these applications, the minority class examples, such as disease samples, credit card frauds, and network intrusions, form only a small portion of the data collection but deserve full attention for accurate prediction. In this paper, we propose three filtering techniques, Higher Weight (HW), Differential Minority Repeat (DMR), and Balanced Minority Repeat (BMR), to identify important features from biased data collections. Experimental comparisons with the ReliefF method on five datasets demonstrate the effectiveness of the proposed methods in selecting informative features from data with biased sample distributions.
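To make the problem concrete, the sketch below illustrates the general idea of imbalance-aware filtering: a class-weighted Fisher-style feature score in which minority-class samples receive larger weights, so that features separating the rare class are not drowned out by the majority class. This is an illustrative example only, not the paper's HW, DMR, or BMR algorithms; the weighting scheme and score are assumptions for demonstration.

```python
# Illustrative sketch (NOT the paper's HW/DMR/BMR methods): rank features
# by a Fisher-style criterion in which each sample is weighted inversely
# to its class frequency, boosting the influence of the minority class.
from collections import Counter

def weighted_fisher_scores(X, y):
    """Score each feature by a class-weighted Fisher criterion.

    X: list of samples, each a list of feature values.
    y: list of class labels; rarer classes get larger sample weights.
    Returns one score per feature (higher = more discriminative).
    """
    n = len(y)
    counts = Counter(y)
    # Inverse-frequency weights: a sample from a rare class counts more.
    w = {c: n / (len(counts) * cnt) for c, cnt in counts.items()}
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        # Weighted overall mean of feature j.
        tot_w = sum(w[y[i]] for i in range(n))
        mean = sum(w[y[i]] * X[i][j] for i in range(n)) / tot_w
        between, within = 0.0, 0.0
        for c in counts:
            idx = [i for i in range(n) if y[i] == c]
            class_mean = sum(X[i][j] for i in idx) / len(idx)
            between += w[c] * len(idx) * (class_mean - mean) ** 2
            within += w[c] * sum((X[i][j] - class_mean) ** 2 for i in idx)
        scores.append(between / within if within > 0 else 0.0)
    return scores

# Toy biased data collection: feature 0 separates the minority class,
# feature 1 is pure noise. The last two samples are the minority class.
X = [[0.1, 5], [0.2, 1], [0.0, 3], [0.1, 4], [0.2, 2],
     [0.9, 3], [1.0, 2]]
y = [0, 0, 0, 0, 0, 1, 1]
scores = weighted_fisher_scores(X, y)
print(scores.index(max(scores)))  # feature 0 ranks highest
```

A filter of this kind is used the same way as the methods in the paper: score all features, keep the top-k, then train any classifier on the reduced feature set.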