A fine-grained self-adapting prompt learning approach for few-shot learning with pre-trained language models
- Publisher: ELSEVIER
- Publication Type: Journal Article
- Citation: Knowledge-Based Systems, 2024, 299
- Issue Date: 2024-09-05
Closed Access
| Filename | Description | Size |
|---|---|---|
| 1-s2.0-S0950705124006026-main.pdf | Published version | 2.36 MB |
This item is closed access and not available.
Pre-trained language models have demonstrated remarkable performance in few-shot learning through the emergence of “prompt-based learning” methods, where task performance relies heavily on the quality of the prompts. Existing prompt learning methods typically customize a single prompt for each few-shot learning task, and all examples in the task share this universal prompt. However, a fine-grained prompt design can enhance few-shot learning performance by leveraging the more diverse information hidden in the set of examples. Motivated by this, this paper introduces an example-specific prompt learning method that embodies fine-grained self-adapting prompts for few-shot learning with pre-trained models. Specifically, we introduce the concept of the “weak consistency assumption” to trade off task-specific consistency against example-specific diversity. Based on this assumption, we propose a novel method called Self-adapting Continuous Prompt Learning (SP-learning) to learn example-specific prompts. It employs a cross-attention prompt generator that considers the characteristics of each input sample and uses a diversity calibration technique to adjust the prompt generator accordingly. By personalizing prompts for each example, SP-learning aims to improve few-shot learning performance. We perform a systematic evaluation on 10 public benchmark tasks, and our method achieves superior performance on 8 of them. Our research sheds light on the importance of personalized prompts and opens up new possibilities for improving few-shot learning tasks.
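Since the full text is closed access, the sketch below is only an illustration of what a cross-attention prompt generator of the kind described in the abstract might look like: learnable prompt queries, shared at the task level, attend to each example's token embeddings to produce example-specific soft prompts. All class and parameter names, dimensions, and layer choices here are assumptions for illustration, not the paper's actual implementation, and the diversity calibration step is not sketched.

```python
import torch
import torch.nn as nn

class CrossAttentionPromptGenerator(nn.Module):
    """Illustrative sketch (hypothetical): task-level prompt queries attend to
    the token embeddings of each input example, yielding example-specific soft
    prompts. Hyperparameters and layer choices are assumptions."""

    def __init__(self, embed_dim: int = 768, prompt_len: int = 10, num_heads: int = 8):
        super().__init__()
        # Shared, learnable prompt queries (the task-consistent component).
        self.prompt_queries = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Cross-attention: queries are the prompt tokens, keys/values are the example tokens.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, example_embeds: torch.Tensor) -> torch.Tensor:
        # example_embeds: (batch, seq_len, embed_dim) from a frozen PLM's embedding layer.
        batch = example_embeds.size(0)
        queries = self.prompt_queries.unsqueeze(0).expand(batch, -1, -1)
        # Each example produces its own prompt by attending over its own tokens.
        attn_out, _ = self.cross_attn(queries, example_embeds, example_embeds)
        # Residual connection keeps prompts close to the shared queries,
        # loosely reflecting the "weak consistency" trade-off described above.
        return self.norm(queries + attn_out)


if __name__ == "__main__":
    gen = CrossAttentionPromptGenerator()
    dummy = torch.randn(4, 32, 768)   # 4 examples, 32 tokens each
    prompts = gen(dummy)              # (4, 10, 768) example-specific soft prompts
    print(prompts.shape)
```

The generated prompts would typically be prepended to the example's embedded input before it is passed through the frozen pre-trained model; how SP-learning actually combines them, and how the diversity calibration adjusts the generator, is specified only in the closed-access paper.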