Bias-Variance Analysis for Ensembling Regularized Multiple Criteria Linear Programming Models

Publisher:
Elsevier Netherlands
Publication Type:
Journal Article
Citation:
Lecture Notes in Computer Science, 2009, 5545 (1), pp. 524 - 533
Issue Date:
2009-01
Files in This Item:
2013005147OK.pdf (Adobe PDF, 140.49 kB)
Regularized Multiple Criteria Linear Programming (RMCLP) models have recently been shown to be effective for data classification. While these models are becoming increasingly important to the data mining community, very little work has been done to systematically investigate RMCLP models from a common machine learning perspective. The absence of such theoretical analysis leaves important questions, such as whether RMCLP is a strong and stable learner, unanswered in practice. In this paper, we carry out a systematic investigation of RMCLP using a well-known statistical analysis approach, bias-variance decomposition. We decompose RMCLP's error into three parts: bias error, variance error, and noise error. Our experiments and observations show that RMCLP's error mainly comes from its bias error, whereas its variance error remains relatively low. This observation indicates that RMCLP is stable but not strong. Consequently, employing a boosting-based ensembling mechanism with RMCLP will most likely further improve the RMCLP models to a large extent.
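The following is a minimal sketch of the kind of bias-variance decomposition experiment described in the abstract. It is not the paper's procedure: it assumes scikit-learn's LogisticRegression as a stand-in for the RMCLP classifier (no RMCLP solver is given in this record), uses synthetic data in place of the paper's datasets, and applies a standard 0-1 loss decomposition where bias is the error of the majority-vote "main" prediction and variance is the average disagreement with it.

```python
# Sketch of a bias-variance decomposition experiment for a classifier.
# Assumptions (not from the source): LogisticRegression stands in for RMCLP,
# synthetic data replaces the paper's datasets, and a Kohavi-Wolpert style
# 0-1 loss decomposition is used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

n_rounds = 50
preds = np.empty((n_rounds, len(y_test)), dtype=int)
for t in range(n_rounds):
    # Each round trains the learner on a bootstrap sample of the pool,
    # simulating repeated draws of training sets from the distribution.
    idx = rng.integers(0, len(y_pool), len(y_pool))
    clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    preds[t] = clf.predict(X_test)

# Main prediction = majority vote over rounds; bias error = error of the
# main prediction; variance error = average disagreement with it.
main_pred = (preds.mean(axis=0) >= 0.5).astype(int)
bias_error = np.mean(main_pred != y_test)
variance_error = np.mean(preds != main_pred)
print(f"bias error: {bias_error:.3f}, variance error: {variance_error:.3f}")
```

A learner for which the bias term dominates and the variance term stays small, as the paper reports for RMCLP, is the "stable but not strong" case in which boosting-style ensembling is expected to help most.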