🤖 AI Summary
Vision-language models (e.g., CLIP) suffer from overfitting and poor cross-task/cross-domain generalization in few-shot prompt tuning. To address this, we propose GLAD, a General, Lightweight, and Adaptive Tuning framework. GLAD synergistically integrates Low-Rank Adaptation (LoRA) with gradient regularization: LoRA reduces the number of trainable parameters to mitigate overfitting, while gradient regularization explicitly constrains optimization directions to enhance robustness against data distribution shifts. Crucially, GLAD requires no architectural modifications to the backbone and no task-specific modules; only a minimal number of additional parameters are introduced for stable fine-tuning. Extensive experiments across 15 benchmark datasets demonstrate that GLAD consistently outperforms state-of-the-art prompt tuning and adapter methods under three challenging scenarios: base-to-novel class transfer, cross-image-domain generalization, and cross-dataset transfer. GLAD thus achieves an effective balance among efficiency, generality, and strong generalization capability.
📄 Abstract
Pre-trained vision-language models, such as CLIP, show impressive zero-shot recognition ability and can be easily transferred to specific downstream tasks via prompt tuning, even with limited training data. However, existing prompt tuning methods face two main challenges: (1) In few-shot scenarios, data scarcity often leads to overfitting, making the model sensitive to changes in the input domain. (2) To mitigate overfitting, these methods typically rely on complex task-specific model architectures and sensitive hyperparameter tuning, severely restricting their general applicability. To address these issues, we propose a simpler and more general framework called GLAD (Generalizable LoRA tuning with RegulArized GraDient). We show that merely applying LoRA already achieves downstream performance comparable to current state-of-the-art prompt-based methods. While LoRA is effective and easy to use, it remains susceptible to overfitting in few-shot learning scenarios. To mitigate this risk, we introduce a gradient-based regularization technique. This technique steers the optimization trajectory toward a more stable parameter region that is robust to variations in data distribution. Through extensive experiments on 15 benchmark datasets, we demonstrate that GLAD outperforms previous tuning approaches in base-to-novel class generalization, image domain generalization, and cross-dataset generalization. The code will be publicly available.