Deep Neural Networks for Multi-Source Transfer Learning

Publication Type: Thesis
Issue Date: 2022
Transfer learning is attracting increasing attention due to its ability to leverage knowledge previously acquired in a source domain to assist in completing a task in a similar target domain. Many existing transfer learning methods deal with single-source transfer learning, but rarely consider that information from a single source can be inadequate for a target task. In addition, most transfer learning methods assume that the source and target domains share the same label space; in practice, however, a source domain sharing the same label space as the target domain may never be found. Third, data privacy and security are becoming increasingly prominent concerns in real-world applications, which means that traditional transfer learning methods relying on data matching cannot be applied. To address these problems, this thesis develops a series of methods for transfer learning with multiple source domains. To measure the contributions of source domains, multi-source contribution learning and dynamic classifier alignment methods are developed. To determine what to transfer, a sample and source distillation method is proposed. To enable transfer learning without access to source data, a general auxiliary model and a fuzzy rule-based model are explored under closed-set, partial and open-set settings. Finally, universal domain adaptation is addressed by designing a model flexible enough to handle multiple source domains with homogeneous and heterogeneous label spaces.
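The idea of weighting source domains by their measured contribution can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: it assumes each source model exposes class-probability outputs, and it weights them by a softmax over hypothetical per-source validation scores on a small labeled target set. All names (`combine_sources`, the example probabilities and scores) are illustrative assumptions.

```python
import numpy as np

def softmax(x, temperature=1.0):
    """Numerically stable softmax with a temperature knob."""
    z = np.asarray(x, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def combine_sources(source_probs, source_scores, temperature=0.1):
    """Fuse per-source class probabilities using weights derived
    from each source's score on held-out target data.
    A low temperature concentrates weight on the best source."""
    weights = softmax(source_scores, temperature)
    fused = sum(w * p for w, p in zip(weights, source_probs))
    return fused, weights

# Hypothetical example: three source models predicting over two classes
# for one target sample, scored by accuracy on a small target set.
probs = [np.array([0.9, 0.1]),   # source A
         np.array([0.6, 0.4]),   # source B
         np.array([0.2, 0.8])]   # source C
scores = [0.85, 0.70, 0.40]      # per-source target-validation accuracy
fused, weights = combine_sources(probs, scores)
```

A reasonable design choice here is the temperature: with a low value the fusion approaches picking the single best source, while a high value approaches a uniform average over all sources.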