Algorithmic stability and hypothesis complexity

Publication Type:
Conference Proceeding
34th International Conference on Machine Learning, ICML 2017, pp. 3413–3421
File: liu17c.pdf (published version, Adobe PDF, 236.33 kB)
© 2017 by the author(s). We introduce a notion of algorithmic stability of learning algorithms, which we term argument stability, that captures the stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its argument stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.
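To make the notion concrete: argument stability measures how far the hypothesis itself moves, in the norm of the hypothesis space, when one training example is replaced. A minimal illustrative sketch, using ridge regression as an assumed example learner (the specific learner and parameters are not from the paper):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Neighboring dataset S': same sample with one training example replaced.
X2, y2 = X.copy(), y.copy()
X2[0] = rng.standard_normal(d)
y2[0] = rng.standard_normal()

w, w2 = ridge_fit(X, y), ridge_fit(X2, y2)
# Argument stability looks at the hypothesis-space distance ||w - w'||,
# not at the change in loss values as in classical uniform stability.
print(np.linalg.norm(w - w2))
```

The distance printed is the quantity that an argument-stability bound would control uniformly over neighboring datasets; the paper's results then translate such control into generalization-error bounds via martingale inequalities in the Banach space containing the hypotheses.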