🤖 AI Summary
This paper addresses catastrophic forgetting and inefficient parameter updates in text–audio cross-modal retrieval under incremental learning settings. The authors formally define the Text–Audio Incremental Learning (TAIL) task for text–audio retrieval and propose PTAT, a lightweight prompt-tuning framework that jointly enforces cross-modal similarity constraints and feature distillation to mitigate knowledge degradation. PTAT fine-tunes only 2.42% of the parameters of the full-parameter Finetune (Sequential) baseline, yet achieves 4.46% higher retrieval performance across four benchmarks (AudioCaps, Clotho, BBC Sound Effects, and AudioSet), while substantially improving retention of previously learned tasks. The core contributions are threefold: (1) the formalization of TAIL as a novel incremental learning task for multimodal retrieval; (2) the design of PTAT, integrating prompt-based adaptation with cross-modal similarity and feature distillation; and (3) empirical evidence that lightweight prompt tuning is highly effective for multimodal incremental learning.
📝 Abstract
Many studies combine text and audio to capture multi-modal information, but they overlook the model's generalization ability on new datasets. Introducing new datasets may shift the feature space of the original dataset, leading to catastrophic forgetting. Meanwhile, a large number of model parameters can significantly impact training efficiency. To address these limitations, we introduce a novel task, Text-Audio Incremental Learning (TAIL), for text-audio retrieval, and propose a new method, PTAT (Prompt Tuning for Audio-Text incremental learning). This method uses prompt tuning to optimize the model parameters while incorporating an audio-text similarity and feature distillation module to effectively mitigate catastrophic forgetting. We benchmark our method and previous incremental learning methods on the AudioCaps, Clotho, BBC Sound Effects, and AudioSet datasets; our method outperforms previous methods significantly, in particular demonstrating stronger resistance to forgetting on older datasets. Compared to the full-parameter Finetune (Sequential) method, our model requires only 2.42% of its parameters while achieving 4.46% higher performance.
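The two anti-forgetting components described above — an audio-text similarity constraint and feature distillation against the model trained on earlier datasets — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the symmetric contrastive form of the similarity loss, the L2 form of the distillation term, and the weighting `lam` are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Project feature vectors onto the unit sphere for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def similarity_loss(audio_feats, text_feats, temperature=0.07):
    """Symmetric contrastive loss over the audio-text cosine similarity matrix.

    Matched (audio, text) pairs sit on the diagonal; the loss pulls them
    together and pushes mismatched pairs apart in both retrieval directions.
    """
    a = l2_normalize(audio_feats)
    t = l2_normalize(text_feats)
    logits = a @ t.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(a))                # diagonal = correct pairing

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)        # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))       # audio->text + text->audio

def distillation_loss(new_feats, old_feats):
    """L2 feature distillation: keep current features close to those of the
    frozen model from earlier tasks, resisting catastrophic forgetting."""
    return np.mean((new_feats - old_feats) ** 2)

# Toy batch: 4 paired audio/text embeddings of dimension 16.
B, D = 4, 16
audio = rng.normal(size=(B, D))
text = audio + 0.01 * rng.normal(size=(B, D))   # nearly matched pairs
old = audio + 0.05 * rng.normal(size=(B, D))    # features from the old model

lam = 1.0  # hypothetical trade-off between the two objectives
total = similarity_loss(audio, text) + lam * distillation_loss(audio, old)
```

In the prompt-tuning setting only a small set of prompt parameters would receive gradients from `total`, while the pretrained audio and text encoders stay frozen; that restriction is what keeps the trainable-parameter fraction small.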