An Adversarial Meta-training Framework for Cross-domain Few-Shot Learning

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2022, PP, (99), pp. 1-12
Issue Date:
2022-01-01
File: An_Adversarial_Meta-training_Framework_for_Cross-domain_Few-Shot_Learning.pdf (Published version, Adobe PDF, 4.9 MB)
Abstract:
Meta-learning provides a promising way for deep learning models to learn efficiently in few-shot settings, enabling deep learning systems to be deployed in many real applications. However, many existing meta-learning-based few-shot learning systems generalize poorly when new tasks come from unseen domains (i.e., cross-domain few-shot learning). In this work, we approach this problem by designing a model-agnostic meta-training framework that improves the generalization of existing meta-learning methods in cross-domain few-shot learning. Rather than elaborately designing modules for a specific meta-learning model, our method is compatible with different meta-learning models across various few-shot problems. To this end, we propose a novel adversarial meta-training framework based on max-min episodic iteration. In the maximization episode, the framework dynamically generates pseudo tasks that promote the learning of cross-domain knowledge. In the minimization episode, it helps the meta-learning model learn cross-task, robust meta-knowledge. To comprehensively evaluate the framework, we conduct experiments on two few-shot learning settings, three meta-learning models, and eight datasets. The results demonstrate that our method applies to various meta-learning models across different few-shot learning problems and outperforms existing state-of-the-art methods.