GLAD: Generalizable Tuning for Vision-Language Models

📅 2025-07-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Vision-language models (e.g., CLIP) suffer from overfitting and poor cross-task/cross-domain generalization in few-shot prompt tuning. To address this, we propose GLAD (Generalizable LoRA tuning with RegulArized GraDient), a simple and general framework. GLAD combines Low-Rank Adaptation (LoRA) with gradient regularization: LoRA reduces the number of trainable parameters to mitigate overfitting, while gradient regularization explicitly constrains optimization directions to improve robustness to data-distribution shifts. Crucially, GLAD requires no architectural modifications to the backbone and no task-specific modules; only a small number of additional parameters are introduced for stable fine-tuning. Extensive experiments across 15 benchmark datasets demonstrate that GLAD consistently outperforms state-of-the-art prompt tuning and adapter methods under three challenging scenarios: base-to-novel class transfer, cross-image-domain generalization, and cross-dataset transfer. GLAD thus strikes an effective balance among efficiency, generality, and generalization capability.
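The low-rank update at the heart of LoRA can be sketched in a few lines. The following is a minimal, framework-free illustration (helper names like `lora_forward` are ours, not from the paper's code): a frozen weight matrix W is augmented with a trainable rank-r product B·A, so trainable parameters drop from d_out·d_in to r·(d_in + d_out), and zero-initializing B makes the adapted layer start out identical to the frozen one.

```python
# Minimal pure-Python sketch of a LoRA-style low-rank update.
# Illustrative only; function names are hypothetical, not from the paper.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """y = W x + (alpha / r) * B (A x), where r = len(A) is the LoRA rank.
    W stays frozen; only the small factors A (r x d_in) and B (d_out x r)
    are trained, so trainable parameters shrink from d_out*d_in
    to r * (d_in + d_out)."""
    r = len(A)
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    return [b + (alpha / r) * u for b, u in zip(base, low_rank)]

# Toy dimensions: d_in = 4, d_out = 3, rank r = 2.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]  # frozen backbone weight
A = [[0.1, 0.0, 0.0, 0.0], [0.0, 0.1, 0.0, 0.0]]
B = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # zero init: update starts at zero
x = [1.0, 2.0, 3.0, 4.0]
print(lora_forward(W, A, B, x))  # → [1.0, 2.0, 3.0], identical to W x
```

With B initialized to zero, the adapted model reproduces the frozen model exactly at the start of fine-tuning, which is what makes LoRA a drop-in change requiring no backbone modification.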

πŸ“ Abstract
Pre-trained vision-language models, such as CLIP, show impressive zero-shot recognition ability and can be easily transferred to specific downstream tasks via prompt tuning, even with limited training data. However, existing prompt tuning methods face two main challenges: (1) In few-shot scenarios, data scarcity often leads to overfitting, making the model sensitive to changes in the input domain. (2) To mitigate overfitting, these methods typically rely on complex task-specific model architectures and sensitive hyperparameter tuning, severely restricting their general applicability. To address these issues, we propose a simpler and more general framework called GLAD (Generalizable LoRA tuning with RegulArized GraDient). We show that merely applying LoRA achieves performance in downstream tasks comparable to current state-of-the-art prompt-based methods. While LoRA is effective and easy to use, it remains susceptible to overfitting in few-shot learning scenarios. To mitigate this risk, we introduce a gradient-based regularization technique. This technique effectively steers the optimization trajectory, encouraging the model to find a more stable parameter region that is robust to variations in data distribution. Through extensive experiments conducted on 15 benchmark datasets, we demonstrate that GLAD outperforms previous tuning approaches in terms of base-to-novel class generalization, image domain generalization, and cross-dataset generalization. The code will be publicly available.
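The abstract does not spell out the exact form of the gradient-based regularization, but one common family of such techniques resolves conflicts between the task gradient and a reference direction (for instance, one that preserves the pre-trained model's behavior) by projecting out the conflicting component. The sketch below illustrates that generic idea in plain Python; `regularize_gradient` and the choice of reference gradient are our illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of gradient regularization via conflict projection.
# This is a generic illustration of steering the optimization direction,
# NOT the paper's specific regularizer, which the summary does not detail.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def regularize_gradient(g_task, g_ref):
    """If g_task conflicts with g_ref (negative inner product), remove the
    conflicting component so the update does not increase the reference
    loss to first order; otherwise return g_task unchanged."""
    conflict = dot(g_task, g_ref)
    if conflict >= 0:
        return list(g_task)
    scale = conflict / dot(g_ref, g_ref)
    return [gt - scale * gr for gt, gr in zip(g_task, g_ref)]

g_task = [1.0, -2.0]      # few-shot task gradient (toy values)
g_ref = [0.0, 1.0]        # reference direction; dot = -2.0, a conflict
print(regularize_gradient(g_task, g_ref))  # → [1.0, 0.0]
```

The intuition matches the abstract's claim: by constraining update directions, optimization is steered toward parameter regions that remain stable under shifts in the data distribution.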
Problem

Research questions and friction points this paper is trying to address.

Overfitting in few-shot vision-language model tuning
Complex task-specific architectures limit general applicability
Sensitivity to data-distribution shifts and hyperparameter choices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LoRA for efficient model tuning
Applies gradient-based regularization technique
Enhances robustness in few-shot learning