Algorithm-Dependent Generalization Bounds for Multi-Task Learning
- Publication Type: Journal Article
- Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39 (2), pp. 227-241
- Issue Date: 2017-02-01
Closed Access
Filename | Description | Size
---|---|---
Algorithm-Dependent Generalization Bounds for Multi-Task Learning.pdf | Published Version | 315.87 kB
This item is closed access and not available.
© 2017 IEEE. Tasks are often collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We analyze the generalization ability of a common parameter shared across tasks, considering both the performance on one particular task and the average performance over multiple tasks. When focusing on one particular task, under a mild assumption on the feature structures, we interpret the role of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and admits a generalization bound with a fast convergence rate of order $\mathcal{O}(1/n)$, where $n$ is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. The corresponding algorithm for learning the common parameter is then also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of order $\mathcal{O}(1/T)$, where $T$ is the number of tasks. These analyses show that the similarity of feature structures in MTL induces specific regularization for prediction, which enables the learning algorithms to generalize quickly and accurately from few examples.
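For orientation, the following is a minimal LaTeX sketch of the standard uniform-stability generalization bound (in the style of Bousquet and Elisseeff) that underlies results of this form; the notation ($A$, $\beta$, $\ell$, $M$, $R$, $\widehat{R}_S$) is illustrative and is not the paper's exact statement or its refinement for MTL.

```latex
% Illustrative only: the classical uniform-stability generalization bound.
% The paper's theorems refine this type of argument for the shared-parameter
% MTL setting; the symbols below are assumptions for this sketch.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

An algorithm $A$ is $\beta$-uniformly stable if, for every sample $S$ of size
$n$ and every $S^{(i)}$ obtained by replacing the $i$-th example,
\[
  \sup_{z}\, \bigl| \ell(A_S, z) - \ell(A_{S^{(i)}}, z) \bigr| \le \beta .
\]
For a $\beta$-stable algorithm with loss bounded by $M$, with probability at
least $1-\delta$ over the draw of $S$,
\[
  R(A_S) \;\le\; \widehat{R}_S(A_S) + 2\beta
          + \bigl(4n\beta + M\bigr)\sqrt{\tfrac{\ln(1/\delta)}{2n}} .
\]
When $\beta = \mathcal{O}(1/n)$, the gap between the risk $R$ and the
empirical risk $\widehat{R}_S$ is of order $\mathcal{O}(1/n)$ in expectation,
which is the sense in which stability of order $\mathcal{O}(1/n)$ (per task)
or $\mathcal{O}(1/T)$ (averaged over $T$ tasks) yields fast rates.

\end{document}
```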