🤖 AI Summary
Parameter-efficient fine-tuning (e.g., LoRA) often yields poorly calibrated uncertainty estimates for large language models (LLMs) in classification tasks.
Method: We propose AdUE, a lightweight post-hoc calibration method that combines a differentiable approximation of the maximum function with L2-SP anchored regularization. It requires no changes to the base model, adds no extra parameters or inference overhead, and is fully compatible with softmax-based confidence calibration and LoRA adaptation.
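The softmax-response score max_c p(c|x) is non-smooth in the logits, which makes it awkward to optimize directly. A minimal sketch of one common differentiable surrogate, a temperature-scaled log-sum-exp, is shown below; the temperature value and this exact form are illustrative assumptions, and the paper's surrogate may differ in detail.

```python
import numpy as np

def smooth_max(probs, temperature=0.05):
    """Differentiable surrogate for max(probs) via temperature-scaled
    log-sum-exp; it approaches the hard maximum as temperature -> 0.
    Illustrative sketch only -- not the paper's exact formulation."""
    t = temperature
    p = np.asarray(probs, dtype=float)
    # log-sum-exp upper-bounds the max and is smooth everywhere,
    # so gradients flow through all class probabilities.
    return t * np.log(np.sum(np.exp(p / t)))

p = np.array([0.7, 0.2, 0.1])
print(smooth_max(p))  # close to the hard maximum 0.7
```

At small temperatures the surrogate is numerically close to the hard maximum while remaining smooth, which is what makes post-hoc tuning of the confidence score tractable.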
Results: Experiments across five NLP classification benchmarks and four language models (RoBERTa, ELECTRA, LLaMA-2, Qwen) demonstrate that AdUE consistently outperforms baselines, including Mahalanobis distance and raw softmax scores, in three respects: (i) lower expected calibration error (ECE), (ii) better confidence-accuracy alignment, and (iii) improved out-of-distribution robustness. Notably, AdUE achieves these gains while preserving the parameter efficiency and deployment simplicity of LoRA.
📝 Abstract
Uncertainty estimation remains a critical challenge in adapting pre-trained language models to classification tasks, particularly under parameter-efficient fine-tuning approaches such as adapters. We introduce AdUE, an efficient post-hoc uncertainty estimation (UE) method that enhances softmax-based estimates. Our approach (1) replaces the maximum function with a differentiable approximation and (2) applies L2-SP regularization, anchoring the fine-tuned head weights to their starting values. Evaluations on five NLP classification datasets across four language models (RoBERTa, ELECTRA, LLaMA-2, Qwen) demonstrate that our method consistently outperforms established baselines such as Mahalanobis distance and softmax response. Our approach is lightweight (no base-model changes) and produces better-calibrated confidence.
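The L2-SP idea mentioned above can be sketched in a few lines: instead of decaying weights toward zero, the penalty anchors the fine-tuned head to its starting point. The coefficient and variable names below are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

def l2_sp_penalty(theta, theta_anchor, alpha=1e-3):
    """L2-SP regularizer: penalize the squared distance of the
    fine-tuned head weights `theta` from their starting-point values
    `theta_anchor`, rather than from zero as in plain weight decay.
    `alpha` is an illustrative regularization strength."""
    diff = np.asarray(theta, dtype=float) - np.asarray(theta_anchor, dtype=float)
    return alpha * np.sum(diff ** 2)

# Hypothetical usage inside a calibration loop:
#   total_loss = task_loss + l2_sp_penalty(head_w, head_w_init)
print(l2_sp_penalty([1.0, 2.0], [1.0, 1.0], alpha=0.5))
```

The penalty is zero when the head has not moved from its anchor, so the regularizer only pays a cost for drifting away from the already fine-tuned solution, which is what keeps the post-hoc calibration from degrading task accuracy.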