Leveraging LLM Parametric Knowledge for Fact Checking without Retrieval

πŸ“… 2026-03-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes INTRA, the first retrieval-free fact-checking paradigm that leverages the intrinsic knowledge and reasoning capabilities of large language models (LLMs) without relying on external retrieval. By modeling interactions among internal representations within an LLM and integrating logit comparison with representation space analysis, INTRA directly assesses the veracity of natural language claims using the model’s parametric knowledge. Extensive experiments across nine datasets, three mainstream LLMs, and eighteen baseline methods demonstrate that INTRA substantially outperforms conventional logit-based approaches, exhibiting strong generalization and robustness across languages and claim sources.

πŸ“ Abstract
Trustworthiness is a core research challenge for agentic AI systems built on Large Language Models (LLMs). To enhance trust, natural language claims from diverse sources, including human-written text, web content, and model outputs, are commonly checked for factuality by retrieving external knowledge and using an LLM to verify the faithfulness of claims to the retrieved evidence. As a result, such methods are constrained by retrieval errors and external data availability, while leaving the model's intrinsic fact-verification capabilities largely unused. We propose the task of fact-checking without retrieval, focusing on the verification of arbitrary natural language claims, independent of their source. To study this setting, we introduce a comprehensive evaluation framework focused on generalization, testing robustness to (i) long-tail knowledge, (ii) variation in claim sources, (iii) multilinguality, and (iv) long-form generation. Across 9 datasets, 18 methods, and 3 models, our experiments indicate that logit-based approaches often underperform compared to those that leverage internal model representations. Building on this finding, we introduce INTRA, a method that exploits interactions between internal representations and achieves state-of-the-art performance with strong generalization. More broadly, our work establishes fact-checking without retrieval as a promising research direction that can complement retrieval-based frameworks, improve scalability, and enable the use of such systems as reward signals during training or as components integrated into the generation process.
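The abstract contrasts logit-based veracity scoring with methods that read internal model representations. The sketch below is an illustration of that contrast only, not the paper's INTRA implementation: on synthetic "hidden states" where true and false claims separate along a latent veracity direction, it compares a noisy scalar logit baseline against a linear probe trained on the full hidden state. All data, dimensions, and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (hypothetical, not from the paper): each "claim" has a
# hidden-state vector h; true claims cluster around +mu, false claims around
# -mu, mimicking a veracity direction in representation space.
d = 16
mu = rng.normal(size=d)
n = 200
labels = rng.integers(0, 2, size=n)  # 1 = true claim, 0 = false claim
hidden = rng.normal(size=(n, d)) + np.where(labels[:, None] == 1, mu, -mu)

# Logit-style baseline: score each claim with a single noisy scalar, standing
# in for a model-confidence quantity such as the probability of "True".
logit_score = hidden @ mu + rng.normal(scale=20.0, size=n)
logit_pred = (logit_score > 0).astype(int)

# Representation-based probe: logistic regression over the full hidden state,
# trained with plain batch gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(hidden @ w)))   # sigmoid predictions
    w -= 0.1 * hidden.T @ (p - labels) / n    # gradient step on log-loss

probe_pred = (hidden @ w > 0).astype(int)

print("logit-baseline accuracy:", (logit_pred == labels).mean())
print("probe accuracy:", (probe_pred == labels).mean())
```

In this toy setting the probe recovers the veracity direction from the multi-dimensional hidden state, while the scalar logit score is swamped by its noise, mirroring the paper's reported finding that logit-based approaches often underperform representation-based ones.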
Problem

Research questions and friction points this paper is trying to address.

fact-checking
retrieval-free
Large Language Models
trustworthiness
intrinsic knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

retrieval-free fact-checking
parametric knowledge
internal representations
INTRA
LLM trustworthiness
πŸ”Ž Similar Papers
No similar papers found.