🤖 AI Summary
This study addresses the pervasive problem of hallucinated citation URLs generated by commercial large language models and deep research agents: many cited links either never existed or no longer resolve, undermining the credibility of their outputs. We present the first large-scale quantitative analysis of this phenomenon, systematically evaluating ten models and agents on the DRBench and ExpertQA datasets, and introduce a taxonomy of citation failures. To mitigate the problem, we propose a detection and self-correction method that validates URLs against the Wayback Machine archive. Our open-source tool, urlhealth, automatically distinguishes hallucinated links from ordinary broken ones and attempts repairs. Experiments show that applying our approach reduces non-resolving citation URLs by 6–79×, lowering their prevalence to under 1% and substantially enhancing citation reliability.
📝 Abstract
Large language models and deep research agents supply citation URLs to support their claims, yet the reliability of these citations has not been systematically measured. We address six research questions about citation URL validity using 10 models and agents on DRBench (53,090 URLs) and 3 models on ExpertQA (168,021 URLs across 32 academic fields). We find that 3–13% of citation URLs are hallucinated (they have no record in the Wayback Machine and likely never existed), while 5–18% are non-resolving overall. Deep research agents generate substantially more citations per query than search-augmented LLMs but hallucinate URLs at higher rates. Domain effects are pronounced: non-resolving rates range from 5.4% (Business) to 11.4% (Theology), with per-model effects even larger. Decomposing failures reveals that some models fabricate every non-resolving URL, while others show substantial link-rot fractions indicating genuine retrieval. As a solution, we release urlhealth, an open-source tool for URL liveness checking and stale-vs-hallucinated classification using the Wayback Machine. In agentic self-correction experiments, models equipped with urlhealth reduce non-resolving citation URLs by 6–79× to under 1%, though effectiveness depends on the model's tool-use competence. The tool and all data are publicly available. Our characterization findings, failure taxonomy, and open-source tooling establish that citation URL validity is both measurable at scale and correctable in practice.
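The stale-vs-hallucinated distinction described above reduces to a simple decision rule: a URL that no longer resolves but has a Wayback Machine snapshot once existed (link rot), while one with no archive record likely never existed. The sketch below illustrates that rule; the type and function names are illustrative assumptions, not the actual urlhealth API.

```python
# Sketch of the stale-vs-hallucinated decision described in the abstract.
# UrlCheck and classify() are hypothetical names, not the urlhealth API.
from dataclasses import dataclass


@dataclass
class UrlCheck:
    resolves: bool  # did a live HTTP request to the URL succeed?
    archived: bool  # does the Wayback Machine hold any snapshot of it?


def classify(check: UrlCheck) -> str:
    """Classify a citation URL following the paper's failure taxonomy."""
    if check.resolves:
        return "valid"  # the link works today
    if check.archived:
        return "stale"  # link rot: it once existed but no longer resolves
    return "likely-hallucinated"  # no live page and no archive record
```

In practice, `resolves` would come from an HTTP liveness check and `archived` from the Wayback Machine Availability API (`https://archive.org/wayback/available?url=...`); the actual tool may use different endpoints and thresholds.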