Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-tuned large language models (LLMs) frequently suffer from miscalibration (misalignment between predicted confidence and actual accuracy), primarily due to overlap between pre-trained knowledge and fine-tuning data: knowledge-consistent samples induce overconfidence, whereas novel knowledge improves calibration.

Method: We identify this knowledge overlap as the core mechanism behind calibration degradation and propose CogCalib, a cognition-aware calibration framework. CogCalib dynamically classifies samples based on knowledge familiarity, measured via logit entropy and gradient sensitivity, and integrates curriculum learning with confidence regularization for adaptive fine-tuning.

Contribution/Results: Evaluated across seven tasks and three LLM families (including Llama3-8B), CogCalib reduces expected calibration error (ECE) by 57% on average, generalizes robustly to out-of-domain tasks, and significantly enhances model objectivity and trustworthiness.

📝 Abstract
Fine-tuned Large Language Models (LLMs) often demonstrate poor calibration, with their confidence scores misaligned with actual performance. While calibration has been extensively studied in models trained from scratch, the impact of LLMs' prior knowledge on calibration during fine-tuning remains understudied. Our research reveals that LLMs' prior knowledge causes potential poor calibration due to the ubiquitous presence of known data in real-world fine-tuning, which appears harmful for calibration. Specifically, data aligned with LLMs' prior knowledge would induce overconfidence, while new knowledge improves calibration. Our findings expose a tension: LLMs' encyclopedic knowledge, while enabling task versatility, undermines calibration through unavoidable knowledge overlaps. To address this, we propose CogCalib, a cognition-aware framework that applies targeted learning strategies according to the model's prior knowledge. Experiments across 7 tasks using 3 LLM families prove that CogCalib significantly improves calibration while maintaining performance, achieving an average 57% reduction in ECE compared to standard fine-tuning in Llama3-8B. These improvements generalize well to out-of-domain tasks, enhancing the objectivity and reliability of domain-specific LLMs, and making them more trustworthy for critical human-AI interaction applications.
Problem

Research questions and friction points this paper is trying to address.

LLMs' prior knowledge causes poor calibration during fine-tuning
Known data induces overconfidence, while new knowledge improves calibration
Need for targeted strategies to enhance calibration without losing performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

CogCalib framework targets prior knowledge issues
Uses cognition-aware learning strategies for calibration
Reduces ECE by 57% on average in Llama3-8B
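For context on the headline metric, expected calibration error (ECE) bins predictions by confidence and takes the weighted average gap between each bin's accuracy and its mean confidence. The following is a minimal illustrative sketch of the standard binned ECE, not the paper's implementation; the function name and toy data are assumptions for demonstration.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean of |accuracy - confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the bin by its share of samples
    return ece

# An overconfident model: ~94% average confidence but only 50% accuracy.
conf = [0.95, 0.92, 0.97, 0.91, 0.94, 0.96]
hits = [1, 0, 1, 0, 1, 0]
print(round(expected_calibration_error(conf, hits), 3))  # → 0.442
```

A well-calibrated model would drive this gap toward zero; the paper's reported 57% reduction refers to this kind of confidence-accuracy gap shrinking under CogCalib relative to standard fine-tuning.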
Ziming Wang
SKLCCSE, School of Computer Science and Engineering, Beihang University
Zeyu Shi
SKLCCSE, School of Computer Science and Engineering, Beihang University
Haoyi Zhou
Associate Professor, Beihang University
Machine Learning · Data Mining · Time-series
Shiqi Gao
Beihang University
Qingyun Sun
Assistant Professor, Beihang University
Data Mining · Graph Machine Learning · Deep Learning
Jianxin Li
SKLCCSE, School of Computer Science and Engineering, Beihang University; Zhongguancun Laboratory, Beijing