🤖 AI Summary
Existing LLM factuality evaluation methods predominantly focus on prompt perturbations and aggregate performance metrics, overlooking the intrinsic uncertainty inherent in the generative process. Method: We propose the Fact Robustness Score (FRS), the first metric that jointly incorporates token-level distributional entropy and temperature-scaling sensitivity to quantify the stability of factual knowledge under decoding-condition perturbations. Contribution/Results: Evaluated across closed-book QA benchmarks—SQuAD, TriviaQA, and HotpotQA—on models of varying scales, FRS increases significantly with model size, from 0.76 for smaller models to 0.93 for larger ones. Moreover, accuracy drops by up to 60% in high-entropy or high-temperature regimes, demonstrating that entropy and temperature are strong predictors of factual correctness. This work establishes a novel, interpretable, and computationally tractable paradigm for assessing the robustness of factual knowledge in LLMs.
📝 Abstract
Ensuring the robustness of factual knowledge in LLMs is critical for reliable applications in tasks such as question answering and reasoning. However, existing evaluation methods predominantly focus on performance-based metrics, typically from the perspective of prompt perturbations, which captures only the externally triggered side of knowledge robustness. To bridge this gap, we introduce a principled approach to measuring factual robustness from the perspective of the generation process, analyzing token distribution entropy in combination with temperature-scaling sensitivity. These two factors form the Factual Robustness Score (FRS), a novel metric that quantifies the stability of a fact against perturbations in decoding conditions, given its initial uncertainty. To validate our approach, we conduct extensive experiments on 5 LLMs across 3 closed-book QA datasets (SQuAD, TriviaQA, and HotpotQA). We show that factual robustness varies significantly -- smaller models report an FRS of $0.76$, larger ones $0.93$ -- with accuracy degrading by roughly $60\%$ under increased uncertainty. These insights demonstrate how entropy and temperature scaling impact factual accuracy, and lay a foundation for developing more robust knowledge retention and retrieval in future models.
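The two ingredients of FRS can be illustrated with a small sketch: token-level entropy captures the model's initial uncertainty over the next token, while temperature scaling perturbs the decoding distribution to probe how stable the top (factual) answer token is. The `toy_frs` combination below is a hypothetical stand-in, not the paper's actual formula; the logit values and temperature grid are likewise illustrative assumptions.

```python
import math

def token_entropy(probs):
    # Shannon entropy (in nats) of a next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def temperature_scale(logits, temperature):
    # Softmax over logits divided by the decoding temperature.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def toy_frs(logits, temperatures=(0.5, 1.0, 1.5)):
    """Illustrative robustness score, NOT the paper's definition:
    penalize both initial entropy and drift of the top-token
    probability as the temperature is varied."""
    base = temperature_scale(logits, 1.0)
    h = token_entropy(base)
    h_max = math.log(len(logits))  # entropy of a uniform distribution
    drift = sum(
        abs(max(temperature_scale(logits, t)) - max(base))
        for t in temperatures
    ) / len(temperatures)
    return (1 - drift) * (1 - h / h_max)

confident = [5.0, 0.0, 0.0, 0.0]   # peaked: model is sure of the fact
uncertain = [1.0, 0.9, 0.8, 0.7]   # nearly flat: fact is unstable
print(toy_frs(confident) > toy_frs(uncertain))  # confident fact scores higher
```

A distribution that is both low-entropy at $T=1$ and stable under rescaling scores near 1, mirroring the paper's intuition that robust facts are those whose correctness survives decoding-condition perturbations.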