Fine-Grained Detection of Context-Grounded Hallucinations Using LLMs

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of large language models (LLMs) in localizing *context-grounded hallucinations*—i.e., generated content unsupported by the source text. To overcome the constraints of coarse-grained, label-based hallucination detection, we propose a fine-grained, free-text hallucination representation scheme. We introduce the first high-quality, human-annotated benchmark specifically designed for LLM hallucination localization. Furthermore, we design an automated, LLM-based evaluation protocol rigorously validated through human assessment. Experiments across four state-of-the-art LLM families show that the best-performing model achieves an F1 score of only 0.67, highlighting a fundamental challenge: current LLMs struggle to distinguish *unverifiable yet correct* statements from *factually incorrect* ones. Our key contributions are threefold: (1) a novel fine-grained hallucination representation paradigm; (2) the first dedicated, human-curated benchmark for hallucination localization; and (3) a human-validated, automated evaluation framework for rigorous, scalable assessment.

📝 Abstract
Context-grounded hallucinations are cases where model outputs contain information not verifiable against the source text. We study the applicability of LLMs for localizing such hallucinations, as a more practical alternative to existing complex evaluation pipelines. In the absence of established benchmarks for meta-evaluation of hallucination localization, we construct one tailored to LLMs, involving a challenging human annotation of over 1,000 examples. We complement the benchmark with an LLM-based evaluation protocol, verifying its quality in a human evaluation. Since existing representations of hallucinations limit the types of errors that can be expressed, we propose a new representation based on free-form textual descriptions, capturing the full range of possible errors. We conduct a comprehensive study, evaluating four large-scale LLMs, which highlights the benchmark's difficulty, as the best model achieves an F1 score of only 0.67. Through careful analysis, we offer insights into optimal prompting strategies for the task and identify the main factors that make it challenging for LLMs: (1) a tendency to incorrectly flag missing details as inconsistent, despite being instructed to check only facts in the output; and (2) difficulty with outputs containing factually correct information absent from the source (and thus not verifiable) due to alignment with the model's parametric knowledge.
Problem

Research questions and friction points this paper is trying to address.

Detecting unverifiable information in model outputs using LLMs
Creating benchmarks for evaluating hallucination localization methods
Analyzing LLM limitations in distinguishing factual inconsistencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs localize hallucinations via free-form descriptions
New benchmark with human-annotated examples for evaluation
Analysis reveals optimal prompting and error patterns
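The evaluation idea behind these contributions can be framed as a matching problem: each model prediction is a free-form textual description of a hallucination, an automated judge decides whether it matches a gold human annotation, and precision/recall/F1 are computed over the matches. The sketch below illustrates this under simplifying assumptions; the `matches` function here is a naive stand-in (normalized string equality), whereas the paper uses an LLM-based, human-validated judge, and the function names are illustrative rather than taken from the paper.

```python
# Illustrative sketch of a match-based F1 over free-form hallucination
# descriptions. NOT the paper's actual protocol: the judge is simplified
# to normalized exact-string comparison.

def matches(pred: str, gold: str) -> bool:
    """Stand-in judge; in the paper this comparison is made by an LLM."""
    return pred.strip().lower() == gold.strip().lower()

def localization_f1(predictions: list[str], gold_annotations: list[str]) -> float:
    # A prediction counts as a true positive if it matches any gold annotation.
    tp = sum(1 for p in predictions if any(matches(p, g) for g in gold_annotations))
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(gold_annotations) if gold_annotations else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: two predicted descriptions, two gold annotations,
# one of which is matched -> precision 0.5, recall 0.5, F1 0.5.
preds = [
    "the output claims the meeting was on Tuesday",
    "the output adds a salary figure not in the source",
]
gold = [
    "the output claims the meeting was on Tuesday",
    "the output attributes the quote to the wrong speaker",
]
print(localization_f1(preds, gold))  # 0.5
```

A real implementation would replace `matches` with a calibrated LLM judge and handle many-to-one matching between predictions and annotations, which is where the paper's human validation of the protocol becomes essential.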