C-LoRA: Contextual Low-Rank Adaptation for Uncertainty Estimation in Large Language Models

📅 2025-05-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
LoRA-based few-shot fine-tuning often yields overconfident predictions, and existing uncertainty-aware approaches neglect the influence of input features on uncertainty estimation. To address this, the authors propose Contextual Low-Rank Adaptation (C-LoRA), the first method to explicitly incorporate input features into the posterior distribution over low-rank adaptation parameters. C-LoRA integrates Bayesian low-rank decomposition, context-conditioned parameter generation, and a lightweight neural modulator to enable sample-wise adaptive uncertainty calibration. Extensive experiments across diverse few-shot benchmarks demonstrate that C-LoRA significantly outperforms prior uncertainty-aware LoRA methods, achieving superior calibration, generalization, and robustness. Ablation studies confirm that the context-aware module is pivotal to these gains.
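To make the summary concrete, here is a minimal PyTorch-style sketch of what a context-conditioned Bayesian LoRA layer could look like: a frozen base weight, Gaussian posteriors over the low-rank factors, and a lightweight modulator that rescales the low-rank update per input sample. All names, shapes, and design choices below are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class ContextualLoRALinear(nn.Module):
    """Hypothetical context-conditioned Bayesian LoRA layer (a sketch, not the paper's code)."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, ctx_dim: int = 32):
        super().__init__()
        # Frozen pretrained weight; only the adapter parameters train.
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)
        # Variational Gaussian posteriors over the low-rank factors A (d_in x r) and B (r x d_out).
        self.A_mu = nn.Parameter(torch.zeros(d_in, rank))
        self.A_logvar = nn.Parameter(torch.full((d_in, rank), -6.0))
        self.B_mu = nn.Parameter(torch.randn(rank, d_out) * 0.01)
        self.B_logvar = nn.Parameter(torch.full((rank, d_out), -6.0))
        # Lightweight modulator: maps each input to rank-wise scales, making the update sample-specific.
        self.modulator = nn.Sequential(
            nn.Linear(d_in, ctx_dim), nn.Tanh(), nn.Linear(ctx_dim, rank)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in). Reparameterized draw from the factor posteriors.
        A = self.A_mu + torch.exp(0.5 * self.A_logvar) * torch.randn_like(self.A_mu)
        B = self.B_mu + torch.exp(0.5 * self.B_logvar) * torch.randn_like(self.B_mu)
        gamma = torch.sigmoid(self.modulator(x))  # (batch, rank) per-sample context scales
        h = (x @ A) * gamma                       # contextually modulated low-rank code
        return self.base(x) + h @ B               # frozen path plus adapted update
```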

📝 Abstract
Low-Rank Adaptation (LoRA) offers a cost-effective solution for fine-tuning large language models (LLMs), but it often produces overconfident predictions in data-scarce few-shot settings. To address this issue, several classical statistical learning approaches have been repurposed for scalable uncertainty-aware LoRA fine-tuning. However, these approaches neglect how input characteristics affect the predictive uncertainty estimates. To address this limitation, we propose Contextual Low-Rank Adaptation (C-LoRA), a novel uncertainty-aware and parameter-efficient fine-tuning approach that develops new lightweight LoRA modules contextualized to each input data sample to dynamically adapt uncertainty estimates. By incorporating data-driven contexts into the parameter posteriors, C-LoRA mitigates overfitting, achieves well-calibrated uncertainties, and yields robust predictions. Extensive experiments demonstrate that C-LoRA consistently outperforms state-of-the-art uncertainty-aware LoRA methods in both uncertainty quantification and model generalization. Ablation studies further confirm the critical role of our contextual modules in capturing sample-specific uncertainties. C-LoRA sets a new standard for robust, uncertainty-aware LLM fine-tuning in few-shot regimes.
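Because the adapted factors are random variables under such a posterior, per-sample uncertainty can be read off by repeating the stochastic forward pass. A minimal Monte Carlo sketch, assuming a model built from layers like the hypothetical one above (`predictive_distribution` is an assumed helper name, not from the paper):

```python
import torch


@torch.no_grad()
def predictive_distribution(model, x: torch.Tensor, n_samples: int = 16):
    """Average several stochastic forward passes to estimate predictive uncertainty (illustrative)."""
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                    # (n_samples, batch, n_classes)
    mean = probs.mean(dim=0)             # MC-averaged predictive probabilities
    # Predictive entropy of the averaged distribution as a total-uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy
```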
Problem

Research questions and friction points this paper is trying to address.

Overconfident predictions in few-shot LoRA fine-tuning
Existing uncertainty-aware methods ignore input influence on uncertainty estimates
Need for robust uncertainty-aware LLM adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic lightweight LoRA modules per input
Data-driven contexts in parameter posteriors
Sample-specific uncertainty capture via contextual modules (a calibration-check sketch follows this list)
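Calibration claims like the ones above are typically scored with the expected calibration error (ECE), which bins predictions by confidence and compares average confidence with accuracy per bin. A generic sketch of that metric, not code from the paper:

```python
import torch


def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 15) -> float:
    """Standard ECE over (N, n_classes) probabilities and (N,) integer labels (generic metric)."""
    conf, pred = probs.max(dim=-1)             # confidence and predicted class per sample
    correct = pred.eq(labels).float()
    ece = torch.zeros(())
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)      # samples whose confidence falls in this bin
        if mask.any():
            gap = (correct[mask].mean() - conf[mask].mean()).abs()
            ece += mask.float().mean() * gap   # |accuracy - confidence|, weighted by bin frequency
    return ece.item()
```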
👥 Authors
Amir Hossein Rahmati
Texas A&M University, College Station, TX
Sanket R. Jantre
Brookhaven National Laboratory, Upton, NY
Weifeng Zhang
Texas A&M University, College Station, TX
Yucheng Wang
ETH Zürich
Multimodal LLM, Speech Understanding, Human-Computer Interaction
Byung-Jun Yoon
Texas A&M University, Brookhaven National Laboratory
Optimal Experimental Design, AI for Science, Bioinformatics, Computational Network Biology
Nathan M. Urban
Brookhaven National Laboratory, Upton, NY
Xiaoning Qian
Texas A&M University
Computational network biology, genomic signal processing, biomedical image analysis