Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach

📅 2025-05-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Preference alignment improves LLMs’ human alignment but severely degrades calibration—manifesting as overconfidence and increased Expected Calibration Error (ECE). This work identifies “preference collapse” as the core mechanism underlying this calibration deterioration. To address it, we propose a calibration-aware fine-tuning paradigm: (i) injecting domain knowledge to enhance semantic reliability, and (ii) introducing an EM algorithm-driven explicit ECE regularization. Furthermore, we establish a theoretical dichotomy between calibratable and non-calibratable models, providing principled guidance for calibration intervention. Evaluated across multiple benchmarks, our method reduces ECE by 30–65% over strong baselines—including temperature scaling and TS-IR—while preserving downstream task accuracy. This represents the first systematic investigation linking preference alignment to calibration degradation and offers both mechanistic insight and a practical, theoretically grounded solution for calibrated preference-aligned LLMs.

Technology Category

Application Category

📝 Abstract
One of the key technologies for the success of Large Language Models (LLMs) is preference alignment. However, a notable side effect of preference alignment is poor calibration: while the pre-trained models are typically well-calibrated, LLMs tend to become poorly calibrated after alignment with human preferences. In this paper, we investigate why preference alignment affects calibration and how to address this issue. For the first question, we observe that the preference collapse issue in alignment undesirably generalizes to the calibration scenario, causing LLMs to exhibit overconfidence and poor calibration. To address this, we demonstrate the importance of fine-tuning with domain-specific knowledge to alleviate the overconfidence issue. To further analyze whether this affects the model's performance, we categorize models into two regimes: calibratable and non-calibratable, defined by bounds of Expected Calibration Error (ECE). In the calibratable regime, we propose a calibration-aware fine-tuning approach to achieve proper calibration without compromising LLMs' performance. However, as models are further fine-tuned for better performance, they enter the non-calibratable regime. For this case, we develop an EM-algorithm-based ECE regularization for the fine-tuning loss to maintain low calibration error. Extensive experiments validate the effectiveness of the proposed methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses poor calibration in aligned Large Language Models
Investigates preference alignment's impact on model calibration
Proposes calibration-aware fine-tuning to maintain performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibration-aware fine-tuning for aligned LLMs
EM-algorithm-based ECE regularization
Domain-specific knowledge alleviates overconfidence
🔎 Similar Papers
No similar papers found.
Jiancong Xiao
Jiancong Xiao
University of Pennsylvania
Learning TheoryStatisticsOptimization
Bojian Hou
Bojian Hou
Meta
Machine LearningArtificial IntelligenceTrustworthy (Gen)AILarge Language ModelHealthTech
Z
Zhanliang Wang
University of Pennsylvania, PA, USA
R
Ruochen Jin
University of Pennsylvania, PA, USA
Qi Long
Qi Long
Professor, University of Pennsylvania
Data ScienceBiostatisticsMachine LearningArtificial Intelligence
W
Weijie J. Su
University of Pennsylvania, PA, USA
L
Li Shen
University of Pennsylvania, PA, USA