Bayesian Principles Improve Prompt Learning In Vision-Language Models

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prompt learning for vision-language models (e.g., CLIP) suffers from overfitting and poor generalization when fine-tuning data is limited. Method: This work introduces Bayesian inference into prompt learning for the first time, modeling the parameter prior (induced by the pre-trained model) and the posterior (corresponding to the learnable prompts) in logit space, and balancing the two via variational inference. Contribution/Results: The approach provably controls the trade-off between task adaptation and generalization without depending strongly on downstream data. Evaluated across multiple cross-modal few-shot benchmarks, it achieves an average accuracy gain of 3.2%, alongside notably improved robustness and training stability. To the authors' knowledge, this is the first logit-level regularization paradigm for prompt learning grounded in Bayesian principles.

📝 Abstract
Prompt learning is a popular fine-tuning method for vision-language models due to its efficiency. It requires a small number of additional learnable parameters while significantly enhancing performance on target tasks. However, most existing methods suffer from overfitting to fine-tuning data, yielding poor generalizability. To address this, we propose a new training objective function based on a Bayesian learning principle to balance adaptability and generalizability. We derive a prior over the logits, where the mean function is parameterized by the pre-trained model, while the posterior corresponds to the fine-tuned model. This objective establishes a balance by allowing the fine-tuned model to adapt to downstream tasks while remaining close to the pre-trained model.
Problem

Research questions and friction points this paper is trying to address.

Address overfitting in prompt learning for vision-language models
Balance adaptability and generalizability using Bayesian principles
Improve performance while maintaining proximity to pre-trained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian learning principle balances adaptability and generalizability
Prior logits parameterized by pre-trained model
Posterior corresponds to fine-tuned model
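The objective described above can be sketched as a standard cross-entropy term on the fine-tuned (posterior) logits plus a divergence term that keeps them close to the pre-trained (prior) logits. This is a minimal illustrative sketch, not the paper's exact variational formulation; the function name `bayesian_prompt_loss`, the KL-divergence choice, and the weight `beta` are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bayesian_prompt_loss(logits_ft, logits_pre, labels, beta=0.1):
    """Cross-entropy on the fine-tuned (posterior) logits plus a
    KL regularizer pulling them toward the pre-trained (prior) logits.
    `beta` trades off adaptability (CE) against generalizability (KL)."""
    p_post = softmax(logits_ft)    # posterior predictive, from learnable prompts
    p_prior = softmax(logits_pre)  # prior predictive, from the frozen model
    n = len(labels)
    ce = -np.log(p_post[np.arange(n), labels] + 1e-12).mean()
    kl = (p_post * (np.log(p_post + 1e-12)
                    - np.log(p_prior + 1e-12))).sum(axis=-1).mean()
    return ce + beta * kl
```

When the fine-tuned and pre-trained logits agree, the KL term vanishes and the loss reduces to plain cross-entropy; as the prompts drift from the pre-trained predictions, the regularizer grows, which is the "remain close to the pre-trained model" behavior the abstract describes.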