🤖 AI Summary
To address contextual hallucination, i.e., the generation of content that contradicts or is unsupported by a long-context input, this paper introduces the first dedicated hallucination detection dataset for long-context scenarios and proposes a lightweight detection architecture based on decomposition and aggregation. The architecture adapts pretrained encoders (e.g., BERT) to very long inputs through chunked encoding, cross-chunk aggregation, and contrastive learning for hallucination discrimination, achieving high detection accuracy without sacrificing inference efficiency. Experiments show that the proposed method significantly outperforms baselines of similar size as well as LLM-based detectors across multiple metrics, while delivering severalfold faster inference. These results validate its effectiveness and practicality for long-context hallucination detection.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across various tasks. However, they are prone to contextual hallucination, generating information that is either unsubstantiated or contradictory to the given context. Although many studies have investigated contextual hallucinations in LLMs, addressing them in long-context inputs remains an open problem. In this work, we take an initial step toward solving this problem by constructing a dataset specifically designed for long-context hallucination detection. Furthermore, we propose a novel architecture that enables pre-trained encoder models, such as BERT, to process long contexts and effectively detect contextual hallucinations through a decomposition and aggregation mechanism. Our experimental results show that the proposed architecture significantly outperforms previous models of similar size as well as LLM-based models across various metrics, while providing substantially faster inference.
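The decomposition-and-aggregation idea can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's implementation: the chunk size and overlap are typical BERT-style values, the encoder is a toy stand-in for a pretrained model's [CLS] embedding, aggregation is plain mean pooling, and the final score is a similarity heuristic where the paper would use a trained classification head.

```python
# Hedged sketch of decomposition-aggregation hallucination detection.
# All components here (chunk size, overlap, toy encoder, mean pooling,
# cosine-based score) are stand-ins for illustration only.
from typing import List

CHUNK_SIZE = 512   # assumed per-chunk token budget (BERT-style 512 limit)
OVERLAP = 64       # assumed overlap so evidence is not cut at a chunk boundary

def chunk_tokens(tokens: List[str], size: int = CHUNK_SIZE,
                 overlap: int = OVERLAP) -> List[List[str]]:
    """Decomposition: split a long context into overlapping chunks."""
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]

def encode_chunk(chunk: List[str]) -> List[float]:
    """Stand-in for a pretrained encoder (e.g., a BERT [CLS] vector).
    Here: a normalized bag-of-characters vector, just to stay runnable."""
    vec = [0.0] * 8
    for tok in chunk:
        for ch in tok:
            vec[ord(ch) % 8] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def aggregate(chunk_vecs: List[List[float]]) -> List[float]:
    """Aggregation: pool chunk embeddings into one context representation
    (mean pooling here; the paper's cross-chunk mechanism may differ)."""
    n = len(chunk_vecs)
    return [sum(v[d] for v in chunk_vecs) / n
            for d in range(len(chunk_vecs[0]))]

def hallucination_score(context_tokens: List[str],
                        response_tokens: List[str]) -> float:
    """Score = 1 - cosine similarity between aggregated context and response.
    A trained discrimination head would replace this heuristic."""
    ctx = aggregate([encode_chunk(c) for c in chunk_tokens(context_tokens)])
    resp = encode_chunk(response_tokens)
    return 1.0 - sum(a * b for a, b in zip(ctx, resp))

# Usage: a response grounded in the context should score lower than a
# fabricated one.
context = ("the contract was signed in paris on may third " * 80).split()
faithful = "the contract was signed in paris".split()
fabricated = "zzz qqq xxyz".split()
assert hallucination_score(context, faithful) < hallucination_score(context, fabricated)
```

The key property the sketch preserves is that the encoder never sees more than one chunk at a time, so the per-pass cost stays bounded regardless of context length; only the cheap aggregation step scales with the number of chunks.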