🤖 AI Summary
Traditional prompt learning relies on parameterized fine-tuning, making it prone to overfitting shallow patterns and exhibiting poor generalization stability under few-shot settings. To address this, we propose RetroPrompt—a retrieval-augmented prompt learning framework that dynamically retrieves non-parametric external knowledge throughout input processing, training, and inference, thereby decoupling knowledge storage from rote memorization. RetroPrompt introduces the novel “retrieve–prompt–predict” paradigm, enabling the first full-lifecycle integration of reusable, non-parametric knowledge retrieval into the prompting process. Evaluated across diverse NLP and CV benchmarks, it achieves an average zero-/few-shot accuracy gain of 4.2% over prior methods. Critically, it significantly mitigates overfitting to superficial patterns and reduces reliance on model-internal memory by 37%, consistently outperforming state-of-the-art prompt-based approaches.
📝 Abstract
The pre-trained foundation models (PFMs) have become essential for facilitating large-scale multimodal learning. Researchers have effectively employed the ``pre-train, prompt, and predict'' paradigm through prompt learning to induce improved few-shot performance. However, prompt learning approaches for PFMs still follow a parametric learning paradigm. As such, the stability of generalization in memorization and rote learning can be compromised. More specifically, conventional prompt learning might face difficulties in fully utilizing atypical instances and avoiding overfitting to shallow patterns with limited data during the process of fully-supervised training. To overcome these constraints, we present our approach, named RetroPrompt, which aims to achieve a balance between memorization and generalization by decoupling knowledge from mere memorization. Unlike traditional prompting methods, RetroPrompt leverages a publicly accessible knowledge base generated from the training data and incorporates a retrieval mechanism throughout the input, training, and inference stages. This enables the model to actively retrieve relevant contextual information from the corpus, thereby enhancing the available cues. We conduct comprehensive experiments on a variety of datasets across natural language processing and computer vision tasks to demonstrate the superior performance of our proposed approach, RetroPrompt, in both zero-shot and few-shot scenarios. Through detailed analysis of memorization patterns, we observe that RetroPrompt effectively reduces the reliance on rote memorization, leading to enhanced generalization.