🤖 AI Summary
This work addresses the critical challenge of hallucinated citations in scientific texts generated by large language models (LLMs), which threaten academic integrity and are difficult to detect manually. To this end, the authors introduce the first benchmark and verification framework designed specifically to detect fabricated references in scientific writing. The framework defines unified metrics for citation faithfulness and evidence alignment, backed by a large-scale, cross-domain dataset validated by human annotators. Methodologically, it proposes an interpretable, scalable multi-agent verification pipeline that chains claim extraction, evidence retrieval, passage matching, reasoning, and calibrated judgment to audit citation authenticity end to end. Experiments show that the approach significantly outperforms existing methods in both accuracy and interpretability, reliably flagging citation errors produced by state-of-the-art LLMs and offering a practical tool for safeguarding the integrity of scientific publishing.
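To make the five-stage design concrete, below is a minimal, runnable Python sketch of such a pipeline. Everything in it is an illustrative assumption rather than the paper's implementation: the `Claim`/`Verdict` types, the token-overlap matcher, and the 0.3 support threshold are hypothetical stand-ins for the LLM agents and retrieval backend a real system would use.

```python
# Hypothetical sketch of a five-stage citation-verification pipeline.
# All names, heuristics, and thresholds are illustrative, not the paper's code.
import re
from dataclasses import dataclass

@dataclass
class Claim:
    text: str           # assertion made in the manuscript
    citation_key: str   # reference the assertion is attributed to

@dataclass
class Verdict:
    supported: bool
    confidence: float   # stand-in for a calibrated probability in [0, 1]
    evidence: str       # best-matching passage, kept for interpretability

def extract_claims(paragraph: str) -> list[Claim]:
    """Stage 1, claim extraction: find sentences carrying a [Key] citation."""
    claims = []
    for sentence in re.split(r"(?<=[.!?])\s+", paragraph):
        m = re.search(r"\[(\w+)\]", sentence)
        if m:
            claims.append(Claim(sentence, m.group(1)))
    return claims

def retrieve_evidence(citation_key: str, corpus: dict[str, list[str]]) -> list[str]:
    """Stage 2, evidence retrieval: fetch passages attributed to the cited work.
    An empty result is itself a signal of a possibly fabricated reference."""
    return corpus.get(citation_key, [])

def match_passages(claim: Claim, passages: list[str]) -> tuple[str, float]:
    """Stage 3, passage matching: score passages against the claim.
    Toy token-overlap scorer; a real system would use a retriever/reranker."""
    claim_tokens = set(claim.text.lower().split())
    best, best_score = "", 0.0
    for p in passages:
        overlap = len(claim_tokens & set(p.lower().split()))
        score = overlap / max(len(claim_tokens), 1)
        if score > best_score:
            best, best_score = p, score
    return best, best_score

def judge(claim: Claim, corpus: dict[str, list[str]], threshold: float = 0.3) -> Verdict:
    """Stages 4 and 5, reasoning plus calibrated judgment over the match score."""
    passages = retrieve_evidence(claim.citation_key, corpus)
    if not passages:
        # Illustrative prior: an unresolvable key is very likely fabricated.
        return Verdict(False, 0.95, "no such reference found")
    evidence, score = match_passages(claim, passages)
    return Verdict(score >= threshold, round(score, 2), evidence)

# Toy run: one genuine citation, one fabricated key.
corpus = {"Smith2021": ["Transformers improve citation matching accuracy."]}
text = ("Transformers improve citation matching [Smith2021]. "
        "Quantum annealing solves peer review [Ghost2024].")
for c in extract_claims(text):
    print(c.citation_key, judge(c, corpus))
```

The design point the sketch preserves is that an empty retrieval result is already evidence: a citation key that resolves to no known publication can be flagged as likely fabricated before any passage matching or reasoning takes place.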
📝 Abstract
Scientific research relies on accurate citation for attribution and integrity, yet large language models (LLMs) introduce a new risk: fabricated references that appear plausible but correspond to no real publications. Such hallucinated citations have already been observed in submissions and accepted papers at major machine learning venues, exposing vulnerabilities in peer review. Meanwhile, rapidly growing reference lists make manual verification impractical, and existing automated tools are brittle under noisy, heterogeneous citation formats and lack standardized evaluation. We present the first comprehensive benchmark and detection framework for hallucinated citations in scientific writing. Our multi-agent verification pipeline decomposes citation checking into claim extraction, evidence retrieval, passage matching, reasoning, and calibrated judgment to assess whether a cited source truly supports its claim. We construct a large-scale human-validated dataset spanning multiple domains and define unified metrics for citation faithfulness and evidence alignment. Experiments with state-of-the-art LLMs reveal substantial citation errors and show that our framework significantly outperforms prior methods in both accuracy and interpretability. This work provides the first scalable infrastructure for auditing citations in the LLM era and practical tools to improve the trustworthiness of scientific references.
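The abstract names unified metrics for citation faithfulness and evidence alignment but does not define them; purely as an illustration, one plausible instantiation scores faithfulness as the share of citation-bearing claims judged supported, and alignment as the average claim-to-evidence match score. The sketch below assumes per-claim verdicts and match scores like those produced by the pipeline sketch above; the paper's actual formulas may differ.

```python
# Hypothetical metric definitions, illustrative only.
def citation_faithfulness(supported: list[bool]) -> float:
    """Share of citation-bearing claims whose cited source exists and
    is judged to support them."""
    return sum(supported) / max(len(supported), 1)

def evidence_alignment(match_scores: list[float]) -> float:
    """Average claim-to-passage match score across all checked citations."""
    return sum(match_scores) / max(len(match_scores), 1)

print(citation_faithfulness([True, False]))  # 0.5
print(evidence_alignment([0.8, 0.0]))        # 0.4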