ProD: Prompting-to-disentangle Domain Knowledge for Cross-domain Few-shot Image Classification

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, 00, pp. 19754-19763
Issue Date:
2023-06-24
Abstract:
This paper considers few-shot image classification under the cross-domain scenario, where the train-to-test domain gap compromises classification accuracy. To mitigate the domain gap, we propose a prompting-to-disentangle (ProD) method through a novel exploration with the prompting mechanism. ProD adopts the popular multi-domain training scheme and extracts the backbone feature with a standard Convolutional Neural Network. Based on these two common practices, the key point of ProD is using the prompting mechanism in the transformer to disentangle the domain-general (DG) and domain-specific (DS) knowledge from the backbone feature. Specifically, ProD concatenates a DG and a DS prompt to the backbone feature and feeds them into a lightweight transformer. The DG prompt is learnable and shared by all the training domains, while the DS prompt is generated from the domain of interest on the fly. As a result, the transformer outputs DG and DS features in parallel with the two prompts, yielding the disentangling effect. We show that: (1) simply sharing a single DG prompt for all the training domains already improves generalization towards the novel test domain; (2) the cross-domain generalization can be further reinforced by making the DG prompt neutral towards the training domains; (3) during inference, the DS prompt is generated from the support samples and can capture test-domain knowledge through the prompting mechanism. Combining all three benefits, ProD significantly improves cross-domain few-shot classification. For instance, on CUB, ProD improves the 5-way 5-shot accuracy from 73.56% (baseline) to 79.19%, setting a new state of the art.
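The disentangling step described in the abstract can be sketched as a single-head attention layer over a sequence formed by concatenating the two prompts with the backbone feature tokens, then reading the outputs aligned with each prompt. The following is a minimal NumPy sketch under stated assumptions: the dimension sizes, the mean-pooled DS-prompt generator, and all function names are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64          # feature dimension (hypothetical)
n_tokens = 49   # backbone feature map flattened to tokens, e.g. a 7x7 grid

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Single-head self-attention: a stand-in for the lightweight transformer.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return attn @ v

# DG prompt: learnable and shared across all training domains
# (random here only for illustration).
dg_prompt = rng.normal(size=(1, d))

def ds_prompt_from_support(support_feats):
    # DS prompt generated on the fly from the domain of interest.
    # Mean-pooling the support-set features is an assumption; the
    # paper's actual generator may differ.
    return support_feats.mean(axis=0, keepdims=True)

def prod_forward(backbone_tokens, support_feats, Wq, Wk, Wv):
    ds_prompt = ds_prompt_from_support(support_feats)
    # Concatenate [DG prompt; DS prompt; backbone tokens] into one sequence.
    x = np.concatenate([dg_prompt, ds_prompt, backbone_tokens], axis=0)
    out = self_attention(x, Wq, Wk, Wv)
    # The outputs at the two prompt positions serve as the DG and DS features.
    dg_feat, ds_feat = out[0], out[1]
    return dg_feat, ds_feat

Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
tokens = rng.normal(size=(n_tokens, d))   # backbone features for one image
support = rng.normal(size=(5, d))         # 5-shot support features
dg_feat, ds_feat = prod_forward(tokens, support, Wq, Wk, Wv)
```

Because the same attention layer produces both outputs in one pass, the two prompts act as parallel "queries" into the backbone feature, which is what yields the disentangling effect the abstract describes.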