Feature selection with biased sample distributions
- Publication Type: Conference Proceeding
- Citation: 2009 IEEE International Conference on Information Reuse and Integration, IRI 2009, 2009, pp. 23-28
- Issue Date: 2009-11-17
Filename | Description | Size
---|---|---
2009001676OK.pdf | | 1.02 MB
This item is closed access and not available.
Feature selection concerns the problem of selecting a small number of features that are important with respect to the class labels in order to build accurate prediction models. Traditional feature selection methods, however, fail to take sample distributions into consideration, which may lead to poor predictions for minority-class examples. Owing to the sophistication and cost of the data collection process, many applications, such as biomedical research, commonly face biased data collections in which one class of examples (e.g., diseased samples) is significantly smaller than the others (e.g., normal samples). For these applications, the minority-class examples, such as disease samples, credit card frauds, and network intrusions, constitute only a small portion of the data but deserve full attention for accurate prediction. In this paper, we propose three filtering techniques, Higher Weight (HW), Differential Minority Repeat (DMR), and Balanced Minority Repeat (BMR), to identify important features from biased data collections. Experimental comparisons with the ReliefF method on five datasets demonstrate the effectiveness of the proposed methods in selecting informative features from data with biased sample distributions.
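The HW, DMR, and BMR algorithms themselves are not reproduced in this closed-access record, so the sketch below only illustrates the general idea of class-balance-aware feature ranking on a biased sample distribution. The scoring function, the synthetic dataset, and all names in it are illustrative assumptions, not the authors' methods.

```python
# Minimal illustrative sketch (not the paper's HW/DMR/BMR algorithms):
# rank features on an imbalanced dataset with a class-weighted separation
# score, so the minority class is not swamped by the majority class.
import numpy as np
from sklearn.datasets import make_classification


def weighted_separation_scores(X, y):
    """Score each feature by the gap between per-class means, normalized by
    the pooled standard deviation, weighting both classes equally
    regardless of their sample sizes."""
    classes = np.unique(y)
    assert len(classes) == 2, "binary illustration only"
    X0, X1 = X[y == classes[0]], X[y == classes[1]]
    # Equal-weight class statistics: the minority class contributes as much
    # to the score as the majority class does.
    gap = np.abs(X0.mean(axis=0) - X1.mean(axis=0))
    pooled_std = np.sqrt(0.5 * (X0.var(axis=0) + X1.var(axis=0))) + 1e-12
    return gap / pooled_std


if __name__ == "__main__":
    # Biased sample distribution: roughly 5% minority class.
    X, y = make_classification(n_samples=2000, n_features=50, n_informative=8,
                               weights=[0.95, 0.05], random_state=0)
    scores = weighted_separation_scores(X, y)
    top = np.argsort(scores)[::-1][:10]
    print("Top-10 feature indices by class-balanced separation:", top)
```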