Multi-Task Learning: Effective Weighting Algorithms and Model Parameterization
- Publication Type: Thesis
- Issue Date: 2024
Open Access
This item is open access.
Multi-Task Learning (MTL) is a widely used paradigm for training a shared model across multiple tasks. Joint training improves data efficiency by letting the model exploit information shared among tasks. However, directly optimizing the mean of the task losses can lead to imbalanced performance because task objectives conflict: different tasks compete for the same model capacity, which can degrade the performance of some tasks. Although many MTL methods address this issue through loss-weighting techniques or model parameterization, effectively balancing these competing objectives remains unresolved. Thus, how to design more effective task-weighting or model-parameterization methods for MTL continues to be a key challenge in the field. To address these problems, this thesis presents a series of novel optimization techniques, task-weighting algorithms, and parameterized models, contributing to the development of more effective multi-task learning systems.
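The contrast the abstract draws between averaging task losses and reweighting them can be sketched in a few lines. This is a minimal illustration, not a method from the thesis: the function name `combined_loss` and the numeric loss values are hypothetical, chosen only to show how uniform averaging lets a large-loss task dominate the objective while non-uniform weights rebalance it.

```python
import numpy as np

def combined_loss(task_losses, weights=None):
    """Combine per-task losses into a single scalar MTL objective.

    With weights=None this reduces to the plain mean of the losses,
    which a task with a much larger loss scale can dominate;
    non-uniform weights rebalance the contribution of each task.
    """
    losses = np.asarray(task_losses, dtype=float)
    if weights is None:
        # Uniform weighting: equivalent to the mean of the task losses.
        weights = np.full(len(losses), 1.0 / len(losses))
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, losses))

# Two hypothetical tasks with very different loss scales: the mean is
# dominated by task 0, while rebalanced weights even out the pull.
print(combined_loss([4.0, 0.2]))              # uniform mean, ~2.1
print(combined_loss([4.0, 0.2], [0.1, 0.9]))  # weighted sum, ~0.58
```

Task-weighting algorithms of the kind the thesis studies differ mainly in how the `weights` vector is chosen and updated during training, rather than in this final weighted sum.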