Graph-based few-shot learning with transformed feature propagation and optimal class allocation

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Neurocomputing, 2022, 470, pp. 247-256
Issue Date:
2022-01-22
Graph neural networks have shown an impressive ability to capture relations among support (labeled) and query (unlabeled) instances in a few-shot task. A feasible approach is to extract features using a pre-trained backbone network and later adjust them in the few-shot scenario with an episodically meta-trained graph network. However, these adjusted features cannot represent the few-shot data characteristics well, owing to the feature distribution mismatch caused by the different optimization objectives of the backbone and the graph network (multi-class pre-training vs. episodic meta-training). Additionally, learning from the limited support instances fails to depict the true data distribution and thus causes incorrect class allocation. In this paper, we propose to transform the features extracted by a pre-trained self-supervised feature extractor into a Gaussian-like distribution to reduce the feature distribution mismatch, which significantly benefits the subsequent meta-training of the graph network. To tackle the incorrect class allocation, we propose to leverage both support and query instances to estimate class centers by computing an optimal class allocation matrix. Extensive experiments on few-shot benchmarks demonstrate that our graph-based few-shot learning pipeline outperforms the baseline by 12% and surpasses state-of-the-art results by a large margin under both fully supervised and semi-supervised settings.
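The two ingredients the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a Tukey-style power transform for the Gaussian-like feature mapping and entropy-regularized optimal transport (Sinkhorn iterations) for computing an allocation matrix between query instances and classes. All function names and parameters here are hypothetical.

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    # Hypothetical Tukey-style power transform: pushes non-negative
    # backbone features toward a more Gaussian-like distribution.
    return np.power(features + eps, beta)

def sinkhorn_allocation(cost, row_marginals, col_marginals,
                        reg=0.1, n_iters=100):
    # Entropy-regularized optimal transport via Sinkhorn iterations:
    # cost[i, j] is the distance from query i to class center j; the
    # returned matrix P allocates query mass to classes while matching
    # the given row (per-query) and column (per-class) marginals.
    K = np.exp(-cost / reg)
    u = np.ones_like(row_marginals)
    for _ in range(n_iters):
        v = col_marginals / (K.T @ u)
        u = row_marginals / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy usage: 6 query instances, 3 classes, uniform marginals.
rng = np.random.default_rng(0)
queries = power_transform(rng.random((6, 8)))
centers = power_transform(rng.random((3, 8)))
cost = ((queries[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
P = sinkhorn_allocation(cost,
                        np.full(6, 1 / 6), np.full(3, 1 / 3))
```

In this sketch, class centers would then be re-estimated as allocation-weighted averages of query features, softly incorporating unlabeled instances into the center estimate.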