🤖 AI Summary
This work proposes INTRA, the first retrieval-free fact-checking paradigm that leverages the intrinsic knowledge and reasoning capabilities of large language models (LLMs) without relying on external retrieval. By modeling interactions among internal representations within an LLM and integrating logit comparison with representation space analysis, INTRA directly assesses the veracity of natural language claims using the model's parametric knowledge. Extensive experiments across nine datasets, three mainstream LLMs, and eighteen baseline methods demonstrate that INTRA substantially outperforms conventional logit-based approaches, exhibiting strong generalization and robustness across languages and claim sources.
📄 Abstract
Trustworthiness is a core research challenge for agentic AI systems built on Large Language Models (LLMs). To enhance trust, natural language claims from diverse sources, including human-written text, web content, and model outputs, are commonly checked for factuality by retrieving external knowledge and using an LLM to verify the faithfulness of claims to the retrieved evidence. As a result, such methods are constrained by retrieval errors and external data availability, while leaving the model's intrinsic fact-verification capabilities largely unused. We propose the task of fact-checking without retrieval, focusing on the verification of arbitrary natural language claims, independent of their source. To study this setting, we introduce a comprehensive evaluation framework focused on generalization, testing robustness to (i) long-tail knowledge, (ii) variation in claim sources, (iii) multilinguality, and (iv) long-form generation. Across 9 datasets, 18 methods, and 3 models, our experiments indicate that logit-based approaches often underperform compared to those that leverage internal model representations. Building on this finding, we introduce INTRA, a method that exploits interactions between internal representations and achieves state-of-the-art performance with strong generalization. More broadly, our work establishes fact-checking without retrieval as a promising research direction that can complement retrieval-based frameworks, improve scalability, and enable the use of such systems as reward signals during training or as components integrated into the generation process.
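To give intuition for the abstract's central finding, the toy sketch below contrasts the two signal families on synthetic data: a "logit-based" score (projecting a hidden state onto fixed output directions, standing in for True/False token logits) versus a linear probe trained directly on internal representations. Everything here is hypothetical, made-up data and names, and this is not the INTRA method itself; it only illustrates why a probe over representations can recover a veracity signal that fixed output logits miss.

```python
# Hypothetical illustration (NOT the paper's INTRA implementation):
# compare a logit-style score with a linear probe over hidden states.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy hidden-state dimensionality

# Synthetic "hidden states": true claims cluster around +mu, false around -mu.
mu = rng.normal(size=D)
X = np.vstack([rng.normal(size=(100, D)) + mu,   # true claims
               rng.normal(size=(100, D)) - mu])  # false claims
y = np.array([1] * 100 + [0] * 100)

# Logit-style score: project onto fixed, model-given output directions
# (here random vectors, standing in for the 'True'/'False' unembedding rows,
# which need not align with the truth direction mu).
w_true, w_false = rng.normal(size=D), rng.normal(size=D)
logit_preds = (X @ (w_true - w_false) > 0).astype(int)

def fit_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe trained on the representations themselves."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = fit_linear_probe(X, y)
probe_preds = (X @ w > 0).astype(int)

logit_acc = (logit_preds == y).mean()
probe_acc = (probe_preds == y).mean()
print(f"logit-style accuracy: {logit_acc:.2f}, probe accuracy: {probe_acc:.2f}")
```

On this toy data the trained probe recovers the truth direction almost perfectly, while the fixed logit projection succeeds only insofar as it happens to align with it, mirroring, in spirit, why representation-based methods can outperform logit-based ones.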