🤖 AI Summary
Prompt learning for vision-language models (e.g., CLIP) suffers from overfitting and poor generalization when fine-tuning data are limited. Method: This work introduces Bayesian inference into prompt learning, modeling the parameter prior (induced by the pre-trained model) and the posterior (corresponding to the learnable prompts) in logit space, and balancing the two via variational inference. Contribution/Results: The approach theoretically controls the trade-off between task adaptation and generalization without strong dependence on downstream data. Evaluated across multiple cross-modal few-shot benchmarks, it achieves an average accuracy gain of 3.2%, alongside significantly improved robustness and training stability. To our knowledge, this is the first logits-level regularization paradigm for prompt learning grounded in Bayesian principles.
📝 Abstract
Prompt learning is a popular fine-tuning method for vision-language models due to its efficiency. It requires a small number of additional learnable parameters while significantly enhancing performance on target tasks. However, most existing methods suffer from overfitting to fine-tuning data, yielding poor generalizability. To address this, we propose a new training objective function based on a Bayesian learning principle to balance adaptability and generalizability. We derive a prior over the logits, where the mean function is parameterized by the pre-trained model, while the posterior corresponds to the fine-tuned model. This objective establishes a balance by allowing the fine-tuned model to adapt to downstream tasks while remaining close to the pre-trained model.
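The objective described above can be illustrated with a minimal sketch: a task cross-entropy term plus a KL term that pulls the fine-tuned (posterior) logits toward the pre-trained (prior) logits. This is an assumption-laden illustration, not the paper's actual loss; the function names, the plain KL regularizer, and the trade-off weight `beta` are all hypothetical stand-ins for the variational objective derived in the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bayesian_prompt_loss(logits_tuned, logits_pretrained, labels, beta=0.1):
    """Illustrative objective: cross-entropy on the downstream task plus a
    KL term keeping the fine-tuned (posterior) logits close to the
    pre-trained (prior) logits. `beta` (hypothetical) sets the balance
    between adaptation and staying near the pre-trained model."""
    p_post = softmax(logits_tuned)        # posterior predictive probabilities
    p_prior = softmax(logits_pretrained)  # prior predictive probabilities
    n = logits_tuned.shape[0]
    # Task term: negative log-likelihood of the correct class.
    ce = -np.log(p_post[np.arange(n), labels] + 1e-12).mean()
    # Regularizer: KL(posterior || prior), computed in probability space.
    kl = (p_post * (np.log(p_post + 1e-12)
                    - np.log(p_prior + 1e-12))).sum(axis=-1).mean()
    return ce + beta * kl
```

When the fine-tuned logits coincide with the pre-trained ones, the KL term vanishes and the loss reduces to plain cross-entropy; as the fine-tuned model drifts away from the prior, the penalty grows, which is the balance the abstract describes.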