Improving LoRA with Variational Learning

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Bayesian LoRA methods improve calibration but yield marginal or even negative gains in accuracy, while incurring high computational overhead and implementation complexity. To address this, we propose a LoRA fine-tuning framework built on the IVON variational optimization algorithm, the first application of IVON to efficient fine-tuning of large models, and integrate it with posterior pruning. Our method achieves significant improvements in both accuracy and calibration at a computational cost comparable to AdamW. On Llama-3.2-3B, it improves commonsense reasoning accuracy by 1.3% and reduces expected calibration error (ECE) by 5.4%, outperforming AdamW, Laplace-LoRA, and BLoB across all metrics. Validated on billion-parameter models, our approach breaks the long-standing accuracy-efficiency trade-off inherent in Bayesian LoRA methods.

📝 Abstract
Bayesian methods have recently been used to improve LoRA finetuning and, although they improve calibration, their effect on other metrics (such as accuracy) is marginal and can sometimes even be detrimental. Moreover, Bayesian methods also increase computational overheads and require additional tricks for them to work well. Here, we fix these issues by using a recently proposed variational algorithm called IVON. We show that IVON is easy to implement and has similar costs to AdamW, and yet it can also drastically improve many metrics by using a simple posterior pruning technique. We present extensive results on billion-scale LLMs (Llama and Qwen series) going way beyond the scale of existing applications of IVON. For example, we finetune a Llama-3.2-3B model on a set of commonsense reasoning tasks and improve accuracy over AdamW by 1.3% and reduce ECE by 5.4%, outperforming AdamW and other recent Bayesian methods like Laplace-LoRA and BLoB. Overall, our results show that variational learning with IVON can effectively improve LoRA finetuning.
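The abstract stresses that IVON is easy to implement with costs similar to AdamW. As a rough illustration only (the variable names, scalings, and hyperparameters below are my own simplifications, not the paper's exact algorithm), a diagonal-Gaussian variational step in the spirit of IVON samples weights from the posterior, forms a reparameterization-based curvature estimate, and takes a Newton-like preconditioned step:

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_step(m, h, grad_fn, lr=0.1, beta2=0.999, delta=1e-8, lam=100.0):
    """One simplified IVON-style variational update (illustrative sketch;
    the published IVON adds momentum, weight decay, and a debiased
    curvature correction).

    m       : posterior mean of the weights
    h       : diagonal curvature (precision) estimate, same shape as m
    grad_fn : returns the loss gradient at a given weight sample
    lam     : effective-sample-size factor scaling the posterior precision
    """
    sigma = 1.0 / np.sqrt(lam * (h + delta))       # posterior std from precision
    w = m + sigma * rng.standard_normal(m.shape)   # sample w ~ N(m, sigma^2)
    g = grad_fn(w)
    h_hat = g * (w - m) / sigma**2                 # reparameterization curvature estimate
    # EMA of curvature (like Adam's second moment); clamp to stay positive
    # (the full algorithm uses a principled correction term instead).
    h = np.maximum(beta2 * h + (1 - beta2) * h_hat, delta)
    m = m - lr * g / (h + delta)                   # Newton-like preconditioned step
    return m, h

# Toy usage: minimize the quadratic loss 0.5 * ||w - t||^2.
t = np.array([1.0, -2.0, 3.0])
m, h = np.zeros(3), np.ones(3)
for _ in range(1000):
    m, h = variational_step(m, h, lambda w: w - t)
```

After training, `m` hovers near the optimum `t` and `h` tracks the (unit) Hessian of the quadratic, while each step costs only one gradient evaluation plus elementwise updates, the same order of cost as AdamW.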
Problem

Research questions and friction points this paper is trying to address.

Enhance LoRA finetuning with variational learning
Reduce computational overhead of Bayesian methods
Improve accuracy and calibration in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using IVON for variational learning
Simple posterior pruning technique
Efficient implementation similar to AdamW
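The paper does not spell out its posterior pruning rule in this summary, but a common criterion in variational pruning (assumed here for illustration) is the signal-to-noise ratio |mean| / sigma of each weight's posterior: weights whose mean is small relative to their uncertainty are zeroed out. A minimal sketch:

```python
import numpy as np

def posterior_prune(mean, sigma, keep_ratio=0.5):
    """Sketch of posterior pruning under an assumed SNR criterion
    (the paper's exact rule may differ): keep only the weights whose
    posterior mean is large relative to its posterior uncertainty."""
    snr = np.abs(mean) / (sigma + 1e-12)
    k = max(1, int(keep_ratio * mean.size))
    # Threshold at the k-th largest SNR; zero everything below it.
    thresh = np.partition(snr.ravel(), -k)[-k]
    return np.where(snr >= thresh, mean, 0.0)

# Toy example: two confident weights (high SNR) survive, two noisy ones are pruned.
mean = np.array([0.9, -0.05, 1.2, 0.02])
sigma = np.array([0.1, 0.5, 0.3, 0.4])
pruned = posterior_prune(mean, sigma, keep_ratio=0.5)
```

A variational method like IVON makes this cheap because the per-weight `sigma` falls out of the learned posterior for free, rather than requiring a separate Hessian pass as in Laplace-style approaches.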