Bias-variance analysis for ensembling regularized multiple criteria linear programming models
- Publication Type:
- Journal Article
- Citation:
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2009, 5545 LNCS (PART 2), pp. 524 - 533
- Issue Date:
- 2009-09-17
Closed Access
Filename | Description | Size
---|---|---
2013005147OK.pdf | | 140.49 kB
This item is closed access and not available.
Regularized Multiple Criteria Linear Programming (RMCLP) models have recently been shown to be effective for data classification. While these models are becoming increasingly important to the data mining community, little work has been done to systematically investigate RMCLP models from a machine learning perspective. The absence of such theoretical analysis leaves important questions, such as whether RMCLP is a strong and stable learner, unanswered in practice. In this paper, we carry out a systematic investigation of RMCLP using a well-known statistical analysis approach: bias-variance decomposition. We decompose RMCLP's error into three parts: bias error, variance error and noise error. Our experiments and observations conclude that RMCLP's error comes mainly from its bias error, whereas its variance error remains relatively low. This observation indicates that RMCLP is stable but not strong. Consequently, employing a boosting-based ensembling mechanism will most likely further improve RMCLP models to a large extent. © 2009 Springer Berlin Heidelberg.
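The kind of decomposition the abstract describes (for 0-1 loss, in the style of Domingos's 2000 formulation) can be sketched as follows. This is an illustrative assumption, not the paper's actual setup: the synthetic data generator, the simple nearest-centroid learner standing in for RMCLP, and the resampling counts are all invented here to show how bias and variance of a classifier can be estimated empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two Gaussian blobs with labels 0/1 (hypothetical data generator).
    X0 = rng.normal(-1.0, 1.0, size=(n // 2, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n - n // 2, 2))
    return np.vstack([X0, X1]), np.array([0] * (n // 2) + [1] * (n - n // 2))

def fit(X, y):
    # Nearest-centroid classifier: one centroid per class.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the class of its nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_test, y_test = make_data(200)

# Train on many independently drawn training sets and record predictions.
preds = np.array([
    predict(fit(*make_data(100)), X_test)
    for _ in range(50)
])  # shape (50 resamples, 200 test points)

# Main prediction = majority vote per test point.
main = (preds.mean(axis=0) >= 0.5).astype(int)

# Bias: systematic disagreement of the main prediction with the truth.
bias = (main != y_test).mean()
# Variance: average disagreement of individual models with the main prediction.
variance = (preds != main[None, :]).mean()

print(f"bias={bias:.3f} variance={variance:.3f}")
```

A learner whose estimated variance stays low while its bias dominates, as the paper reports for RMCLP, is "stable but not strong", which is exactly the profile that boosting-style ensembling is known to help.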