🤖 AI Summary
This work addresses a critical limitation in existing knowledge probing methods, which focus solely on language models’ prediction accuracy while neglecting the reliability reflected in their confidence calibration. We propose the first framework that integrates confidence calibration into knowledge probing, evaluating the reliability of relational knowledge along three dimensions: intrinsic confidence, structural consistency, and semantic grounding. Through large-scale experiments on both causal and masked language models—augmented with paraphrase consistency analysis and semantic confidence expression modeling—we reveal a pervasive overconfidence issue, particularly pronounced in models pre-trained with a masking objective. Notably, confidence estimates that incorporate paraphrase inconsistency yield the best calibration, exposing a fundamental deficit in models’ semantic understanding of linguistic expressions of confidence.
📝 Abstract
Knowledge probing quantifies how much relational knowledge a language model (LM) has acquired during pre-training. Existing knowledge probes evaluate model capabilities through metrics like prediction accuracy and precision. However, such evaluations fail to account for the model's reliability, which is reflected in the calibration of its confidence scores. In this paper, we propose a novel calibration probing framework for relational knowledge, covering three modalities of model confidence: (1) intrinsic confidence, (2) structural consistency, and (3) semantic grounding. Our extensive analysis of ten causal and six masked language models reveals that most models, especially those pre-trained with the masking objective, are overconfident. The best-calibrated scores come from confidence estimates that account for inconsistencies due to statement rephrasing. Moreover, even the largest pre-trained models fail to encode the semantics of linguistic confidence expressions accurately.
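To make the notion of (mis)calibration concrete: a model is well calibrated when, among predictions made with confidence p, roughly a fraction p are correct. A common way to measure this is the expected calibration error (ECE). The sketch below is a standard binned-ECE implementation for illustration only; it is not the paper's probing framework, and the toy inputs are invented.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weight each confidence bin by its size and sum the
    absolute gaps between mean accuracy and mean confidence per bin.
    Illustrative sketch, not the paper's evaluation code."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue  # empty bin contributes nothing
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap  # bin weight * calibration gap
    return ece

# Toy example (hypothetical data): same confidences, different outcomes.
conf = [0.95, 0.90, 0.85, 0.80]
ece_overconfident = expected_calibration_error(conf, [1, 0, 0, 0])
ece_calibrated = expected_calibration_error(conf, [1, 1, 1, 1])
```

An overconfident model, as the paper finds for masked LMs, produces high confidences with lower accuracy, inflating the per-bin gaps and hence the ECE.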