Poly-FEVER: A Multilingual Fact Verification Benchmark for Hallucination Detection in Large Language Models

📅 2025-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing hallucination detection benchmarks are heavily English-centric and lack systematic evaluation of large language models' (LLMs) hallucination generation and detection capabilities in multilingual settings. To address this gap, we introduce Poly-FEVER, the first large-scale multilingual fact verification benchmark, covering 11 languages and comprising 77,973 human-verified claims. It is constructed via cross-lingual alignment and re-annotation of FEVER, Climate-FEVER, and SciFact. Poly-FEVER enables the first systematic cross-lingual analysis of hallucination patterns, revealing how topic distribution and web resource accessibility influence hallucination frequency, and uncovering significant language-specific biases. The benchmark has been used to evaluate multilingual hallucination in mainstream models including ChatGPT and LLaMA, advancing trustworthy and linguistically inclusive AI. The dataset is publicly released on Hugging Face.

๐Ÿ“ Abstract
Hallucinations in generative AI, particularly in Large Language Models (LLMs), pose a significant challenge to the reliability of multilingual applications. Existing benchmarks for hallucination detection focus primarily on English and a few widely spoken languages, lacking the breadth to assess inconsistencies in model performance across diverse linguistic contexts. To address this gap, we introduce Poly-FEVER, a large-scale multilingual fact verification benchmark specifically designed for evaluating hallucination detection in LLMs. Poly-FEVER comprises 77,973 labeled factual claims spanning 11 languages, sourced from FEVER, Climate-FEVER, and SciFact. It provides the first large-scale dataset tailored for analyzing hallucination patterns across languages, enabling systematic evaluation of LLMs such as ChatGPT and the LLaMA series. Our analysis reveals how topic distribution and web resource availability influence hallucination frequency, uncovering language-specific biases that impact model accuracy. By offering a multilingual benchmark for fact verification, Poly-FEVER facilitates cross-linguistic comparisons of hallucination detection and contributes to the development of more reliable, language-inclusive AI systems. The dataset is publicly available to advance research in responsible AI, fact-checking methodologies, and multilingual NLP, promoting greater transparency and robustness in LLM performance. The proposed Poly-FEVER is available at: https://huggingface.co/datasets/HanzhiZhang/Poly-FEVER.
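The cross-linguistic comparison the abstract describes boils down to scoring a model's verification predictions against gold labels separately for each language, then comparing the per-language accuracies. A minimal sketch of that computation is below; the languages, labels, and prediction records are invented placeholders, not real Poly-FEVER data or the authors' evaluation code.

```python
# Hypothetical sketch: per-language fact-verification accuracy, the kind of
# comparison a multilingual benchmark like Poly-FEVER enables. All records
# below are toy placeholders, not actual benchmark data.
from collections import defaultdict

def per_language_accuracy(records):
    """records: iterable of (language, gold_label, predicted_label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for lang, gold, pred in records:
        total[lang] += 1
        if gold == pred:
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

# Toy records with FEVER-style SUPPORTS/REFUTES labels.
records = [
    ("en", "SUPPORTS", "SUPPORTS"),
    ("en", "REFUTES", "REFUTES"),
    ("en", "REFUTES", "SUPPORTS"),
    ("zh", "SUPPORTS", "REFUTES"),
    ("zh", "REFUTES", "REFUTES"),
]
acc = per_language_accuracy(records)
print(acc)  # an accuracy gap between languages would suggest language-specific bias
```

In the paper's actual setting, the records would come from running an LLM over the 77,973 claims in each of the 11 languages; the sketch only illustrates the aggregation step.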
Problem

Research questions and friction points this paper addresses.

Addresses multilingual hallucination detection gaps in LLMs
Evaluates model inconsistencies across diverse linguistic contexts
Analyzes language-specific biases affecting LLM accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark for hallucination detection
Large-scale dataset with 77,973 labeled claims
Analyzes topic and resource biases across languages