AI Summary
This work addresses the challenges of accuracy and scalability in knowledge graph fact verification at scale, where existing automated methods remain immature. We propose the first multidimensional evaluation framework for assessing large language models (LLMs) in this context, systematically examining their capabilities along three dimensions: internal knowledge, retrieval-augmented generation (RAG), and multi-model consensus. To support this evaluation, we construct a RAG dataset comprising two million documents, a FactCheck benchmark, and an interactive analysis platform, conducting experiments across three real-world knowledge graphs. Our results reveal that while LLMs show promise, their performance lacks robustness; furthermore, the effectiveness of RAG and multi-model strategies varies significantly, underscoring both the necessity of systematic evaluation and the practical utility of our proposed framework.
Abstract
Knowledge Graphs (KGs) store structured factual knowledge by linking entities through relationships, making them crucial for many applications. These applications depend on the KG's factual accuracy, so verifying facts is essential, yet challenging. Expert manual verification is ideal but impractical at scale. Automated methods show promise but are not yet ready for real-world KGs. Large Language Models (LLMs) offer potential through their semantic understanding and broad knowledge access, yet their suitability and effectiveness for KG fact validation remain largely unexplored. In this paper, we introduce FactCheck, a benchmark designed to evaluate LLMs for KG fact validation across three key dimensions: (1) the LLMs' internal knowledge; (2) external evidence via Retrieval-Augmented Generation (RAG); and (3) aggregated knowledge via a multi-model consensus strategy. We evaluate open-source and commercial LLMs on three diverse real-world KGs. FactCheck also includes a RAG dataset of more than two million documents tailored for KG fact validation, along with an interactive platform for exploring and analyzing verification decisions. Our experimental analyses show that while LLMs yield promising results, they are not yet sufficiently stable and reliable for real-world KG validation scenarios. Integrating external evidence through RAG yields fluctuating performance, providing inconsistent improvements over simpler approaches at higher computational cost. Similarly, multi-model consensus strategies do not consistently outperform individual models, underscoring the lack of a one-size-fits-all solution. These findings emphasize the need for a benchmark like FactCheck to systematically evaluate and drive progress on this difficult yet crucial task.