On the robustness and generalization of Cauchy regression

Publication Type:
Conference Proceeding
Citation:
ICIST 2014 - Proceedings of the 2014 4th IEEE International Conference on Information Science and Technology, 2014, pp. 100-105
Issue Date:
2014-01-01
Files in This Item:
06920341A.pdf (Published version, Adobe PDF, 145.59 kB)
Abstract:
© 2014 IEEE. It was recently highlighted in a special issue of Nature [1] that the value of big data has yet to be effectively exploited for innovation, competition and productivity. To realize the full potential of big data, big learning algorithms need to be developed that keep pace with the continuous creation, storage and sharing of data. Least squares (LS) and least absolute deviation (LAD) have been successful regression tools in business, government and society over the past few decades. However, both are severely limited by noisy data because their breakdown points are zero, i.e., they do not tolerate outliers. By appropriately setting the tuning constant of Cauchy regression (CR), the maximum possible breakdown point (50%) can be attained, so CR is capable of learning a robust model from noisy big data. Although the breakdown point of CR has already been comprehensively analyzed in theory, we propose a new approach that interprets the optimization of the objective function as a sample-weighting procedure, which clearly exposes the differences in robustness among LS, LAD and CR. We also study the statistical performance of CR: we derive generalization error bounds for CR by analyzing the covering number and Rademacher complexity of the hypothesis class, and show how the scale parameter affects its performance.
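The sample-weighting interpretation mentioned in the abstract can be illustrated with iteratively reweighted least squares (IRLS): minimizing the Cauchy loss rho(r) = (c^2/2) log(1 + (r/c)^2) amounts to repeatedly solving a weighted LS problem whose weights w_i = 1/(1 + (r_i/c)^2) shrink toward zero for large residuals, whereas LS weights every sample equally (w_i = 1) and LAD effectively weights by w_i = 1/|r_i|. The sketch below is illustrative rather than code from the paper; the IRLS formulation is standard, and the tuning constant c = 2.385 is a common default from the robust-statistics literature, not a value prescribed by the authors.

```python
import numpy as np

def cauchy_loss(r, c=2.385):
    """Cauchy loss: rho(r) = (c^2 / 2) * log(1 + (r / c)^2)."""
    return 0.5 * c**2 * np.log1p((r / c) ** 2)

def irls_cauchy(X, y, c=2.385, n_iter=100, tol=1e-8):
    """Fit a linear model under the Cauchy loss via iteratively
    reweighted least squares (IRLS).

    Each iteration solves a weighted LS problem with weights
        w_i = 1 / (1 + (r_i / c)^2),
    so samples with large residuals (outliers) are downweighted.
    Compare: LS uses w_i = 1 (no outlier protection) and LAD
    effectively uses w_i = 1 / |r_i|.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / (1.0 + (r / c) ** 2)        # Cauchy sample weights
        WX = X * w[:, None]
        # Weighted normal equations: (X^T W X) beta = X^T W y
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
    y[:20] += 50.0                            # inject 10% gross outliers
    print("LS:", np.linalg.lstsq(X, y, rcond=None)[0])  # pulled toward outliers
    print("CR:", irls_cauchy(X, y))           # stays close to [1.0, 2.0]
```

Because the weights depend on the residuals and the residuals depend on the fit, the procedure alternates between the two; this alternation is the sample-weighted view of the objective under which the three estimators can be contrasted.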