Auto-FSL: Searching the Attribute Consistent Network for Few-Shot Learning

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(3), pp. 1213-1223
Issue Date:
2022-01-01
Abstract:
Prevailing deep methods for image recognition require massive numbers of labeled samples in each visual category for training. However, large-scale data annotation is time-consuming, and some uncommon categories have only rare samples available. To address this issue, we focus on the more challenging few-shot learning (FSL) task, where only a few labeled images are used in the training stage. Existing FSL models are built on various convolutional neural networks (CNNs), which are trained on an auxiliary base dataset and evaluated for new few-shot predictions on a novel dataset. The performance of these models is difficult to improve further because of the domain shift between the base and novel datasets and the monotonous network architectures. Considering this, we propose a novel automatic attribute consistent network, called Auto-ACNet, to overcome the above problems. On one hand, Auto-ACNet utilizes attribute information about the base and novel categories to guide the representation learning procedure. It introduces consistent and non-consistent subnets to capture the common and differing attributes of an image pair, which helps mitigate the domain-shift problem. On the other hand, the architecture of Auto-ACNet is searched with the popular neural architecture search (NAS) technique DARTS to obtain a superior FSL network automatically, and the DARTS search space is extended with a position-aware module to better extract attribute characteristics. Extensive experimental results on two datasets indicate that the proposed Auto-ACNet achieves significant improvements over state-of-the-art competitors in the literature.
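The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of two ideas the abstract names: a DARTS-style mixed operation, where candidate operations on a cell edge are blended with softmax-weighted architecture parameters, and a pair of subnets that split an image pair's features into "consistent" (shared) and "non-consistent" (differing) attribute representations. The candidate-op list, module names, and the way the two branches combine features are illustrative assumptions, and the position-aware module from the paper is not reproduced here.

```python
# Minimal sketch (assumed, not the authors' code) of a DARTS mixed op and an
# attribute consistent / non-consistent pair of subnets for an image pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operations for one edge of a searched cell.
CANDIDATE_OPS = {
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2, bias=False),
    "skip":    lambda c: nn.Identity(),
}

class MixedOp(nn.Module):
    """DARTS-style mixed op: output is a softmax-weighted sum of candidates."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList(op(channels) for op in CANDIDATE_OPS.values())
        # Architecture parameters (alphas), optimized on a validation split in DARTS.
        self.alphas = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alphas, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class PairSubnets(nn.Module):
    """Illustrative consistent / non-consistent split for an image pair."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.backbone = MixedOp(channels)                        # searched feature extractor
        self.consistent = nn.Conv2d(channels, channels, 1)       # shared attributes
        self.non_consistent = nn.Conv2d(channels, channels, 1)   # differing attributes

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        common = self.consistent(fa) * self.consistent(fb)             # agreement between the pair
        different = self.non_consistent(fa) - self.non_consistent(fb)  # disparity between the pair
        return common, different

# Usage: in an episode, each query feature map would be paired with each support image.
model = PairSubnets(channels=64)
a, b = torch.randn(2, 64, 21, 21), torch.randn(2, 64, 21, 21)
common, different = model(a, b)
print(common.shape, different.shape)
```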