Self-Aware Knowledge Probing: Evaluating Language Models' Relational Knowledge through Confidence Calibration

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical limitation in existing knowledge probing methods, which focus solely on language models’ prediction accuracy while neglecting the reliability reflected in their confidence calibration. We propose the first framework that integrates confidence calibration into knowledge probing, evaluating the reliability of relational knowledge along three dimensions: intrinsic confidence, structural consistency, and semantic anchoring. Through large-scale experiments on both causal and masked language models—augmented with paraphrase consistency analysis and semantic confidence expression modeling—we reveal a pervasive overconfidence issue, particularly pronounced in masked pre-trained models. Notably, confidence estimates incorporating paraphrase inconsistency yield the most effective calibration, exposing a fundamental deficit in models’ semantic understanding of linguistic expressions of confidence.

📝 Abstract
Knowledge probing quantifies how much relational knowledge a language model (LM) has acquired during pre-training. Existing knowledge probes evaluate model capabilities through metrics like prediction accuracy and precision. Such evaluations fail to account for the model's reliability, reflected in the calibration of its confidence scores. In this paper, we propose a novel calibration probing framework for relational knowledge, covering three modalities of model confidence: (1) intrinsic confidence, (2) structural consistency and (3) semantic grounding. Our extensive analysis of ten causal and six masked language models reveals that most models, especially those pre-trained with the masking objective, are overconfident. The best-calibrated scores come from confidence estimates that account for inconsistencies due to statement rephrasing. Moreover, even the largest pre-trained models fail to encode the semantics of linguistic confidence expressions accurately.
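The overconfidence the abstract describes is typically quantified by comparing a model's confidence scores against its actual accuracy. As a minimal sketch of that idea, here is a pure-Python implementation of Expected Calibration Error (ECE), a standard calibration metric; the binning scheme and example numbers are illustrative assumptions, not the paper's exact setup.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then sum the per-bin gap
    between mean confidence and accuracy, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Clamp confidence 1.0 into the top bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - mean_conf)
    return ece

# Hypothetical probe results: a high-confidence wrong answer
# (0.95, incorrect) drives the calibration error up.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 0, 1]))
```

An overconfident model, in this framing, is one whose mean confidence per bin consistently exceeds its accuracy; a perfectly calibrated model has ECE 0.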
Problem

Research questions and friction points this paper is trying to address.

knowledge probing
confidence calibration
relational knowledge
language models
model reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

confidence calibration
knowledge probing
relational knowledge
language models
semantic grounding
Christopher Kissling
Humboldt-Universität zu Berlin
Elena Merdjanovska
Humboldt-Universität zu Berlin, Science of Intelligence
Alan Akbik
Humboldt-Universität zu Berlin
Natural Language Processing · Machine Learning · Language Modeling · Information Extraction