🤖 AI Summary
Fine-tuning video-language models often degrades generalization to unseen categories, and existing prompt tuning methods sacrifice the learning capability of soft prompts when mitigating catastrophic forgetting. To address this, we propose a plug-and-play coupled prompt learning framework: it integrates pretrained hard prompts with learnable soft prompts through a mapping layer, and constructs universal semantic anchors from irrelevant video clips and negative prompts to alleviate semantic space collapse. Our method jointly optimizes soft prompts in both the text and vision modalities, transfers hard prompts across datasets, designs negative-sample prompts, and refines the mapping network, thereby balancing learnability and generalization. Experiments on multiple video understanding benchmarks show that our approach significantly outperforms state-of-the-art prompt tuning methods, especially on base-to-novel generalization under zero-shot and few-shot transfer.
📄 Abstract
Visual and textual soft prompt tuning can effectively improve the adaptability of Vision-Language Models (VLMs) to downstream tasks. However, fine-tuning on video tasks impairs the model's generalization to unseen classes. Existing methods attempt to mitigate this forgetting by regularizing the gap between hand-crafted prompts and soft prompts, but doing so also weakens the learning ability of the soft prompts. To address this challenge, we propose a plug-and-play coupled prompt learning framework that improves the generalization of VLMs on video tasks; its core motivation is to mitigate the narrowing of the semantic space during fine-tuning by introducing externally supervised prompts. Specifically, for textual prompts, we introduce prompts pre-trained on other datasets as hard prompt tokens. These are concatenated with soft prompt tokens and coupled via a learnable mapping layer. This competitive prompting scheme prevents the semantic space from overfitting to the supervised categories. In addition, we introduce a carefully designed set of irrelevant videos and negative prompts as generic attribute anchors, which keep generic attributes aligned with the pre-trained semantic space and thus preserve generalization. Experiments on video tasks demonstrate that our method significantly outperforms state-of-the-art prompt tuning approaches across generalization benchmarks, particularly on base-to-new class prediction.
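The coupling of frozen hard prompt tokens with learnable soft prompt tokens can be sketched as below. This is a minimal NumPy illustration under assumptions not stated in the abstract: a single linear mapping layer, hypothetical prompt lengths and embedding dimension, and random placeholders for the token embeddings. In practice the soft tokens and mapping weights would be trained by gradient descent while the hard tokens stay frozen, and the coupled sequence would be fed to the text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: soft/hard prompt lengths and token embedding dimension
n_soft, n_hard, d = 4, 4, 512

# Soft prompt tokens: learnable parameters (randomly initialized here)
soft_tokens = rng.normal(scale=0.02, size=(n_soft, d))

# Hard prompt tokens: frozen embeddings of a prompt pre-trained on another dataset
hard_tokens = rng.normal(scale=0.02, size=(n_hard, d))

# Learnable mapping layer that couples the two token sets (a single linear map here)
W = rng.normal(scale=0.02, size=(d, d))
b = np.zeros(d)

def coupled_prompt(soft, hard, W, b):
    """Concatenate soft and frozen hard tokens, then pass them through the mapping layer."""
    tokens = np.concatenate([soft, hard], axis=0)  # shape: (n_soft + n_hard, d)
    return tokens @ W + b                          # coupled prompt for the text encoder

prompt = coupled_prompt(soft_tokens, hard_tokens, W, b)
print(prompt.shape)  # (8, 512)
```

During fine-tuning, gradients would flow into `soft_tokens`, `W`, and `b` only, so the hard tokens act as an external supervisory signal competing with the soft prompts rather than being overwritten by them.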