LiveSearchBench: An Automatically Constructed Benchmark for Retrieval and Reasoning over Dynamic Knowledge

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing QA evaluation benchmarks suffer from static, memory-centric designs that inadequately assess models' real-time retrieval and multi-hop reasoning over dynamic knowledge. To address this, we introduce LiveSearchBench, a fully automated, scalable benchmark for evaluating dynamic knowledge reasoning. LiveSearchBench constructs temporally grounded questions by differencing Wikidata snapshots across multiple time points; questions undergo rigorous filtering via triple quality assessment, SPARQL validation of answer uniqueness, and natural-language synthesis, yielding a high-quality, verifiable dataset spanning one- to three-hop reasoning. Its core innovation shifts the evaluation paradigm from static fact memorization to real-time retrieval over an external knowledge base and temporal multi-hop inference. Experiments reveal substantial performance degradation of mainstream LLMs on recent facts and multi-hop queries; retrieval augmentation and instruction tuning yield only marginal improvements, highlighting dynamic knowledge reasoning as a critical unsolved challenge.

📝 Abstract
Evaluating large language models (LLMs) on question answering often relies on static benchmarks that reward memorization and understate the role of retrieval, failing to capture the dynamic nature of world knowledge. We present LiveSearchBench, an automated pipeline for constructing retrieval-dependent benchmarks from recent knowledge updates. Our method computes deltas between successive Wikidata snapshots, filters candidate triples for quality, and synthesizes natural-language questions at three levels of reasoning difficulty, each guaranteed to admit a unique, verifiable answer through SPARQL validation. The pipeline is fully automated, scalable across time, and minimizes human intervention, enabling continual regeneration of temporally grounded benchmarks. Experiments show a pronounced performance drop when models confront facts that post-date pretraining, with the gap most salient on multi-hop queries. Retrieval augmented methods and larger, instruction-tuned models provide partial gains but fail to close this recency gap. By design, LiveSearchBench shifts evaluation from static memorization toward tasks that require up-to-date retrieval and reasoning, offering a foundation for systematic, long-term assessment of LLMs under evolving knowledge.
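The delta step the abstract describes, comparing successive Wikidata snapshots to surface facts that post-date pretraining, can be sketched as a set difference over triples. This is a minimal illustration, not the paper's implementation; the entity/property IDs and triple format are assumptions.

```python
# Illustrative sketch of the snapshot-delta step: triples are
# (subject, predicate, object) tuples of Wikidata IDs. The IDs below
# and the plain set-difference formulation are assumptions.

def snapshot_delta(old_triples, new_triples):
    """Triples present in the newer snapshot but not the older one:
    candidate 'fresh' facts for retrieval-dependent questions."""
    return set(new_triples) - set(old_triples)

old = {("Q1", "P35", "Q100"), ("Q2", "P17", "Q30")}  # earlier snapshot
new = {("Q1", "P35", "Q200"), ("Q2", "P17", "Q30")}  # later snapshot

added = snapshot_delta(old, new)
# only the updated head-of-state triple ("Q1", "P35", "Q200") is new
```

In practice the candidate triples would then pass through the quality filtering and question-synthesis stages the abstract outlines.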
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on dynamic knowledge beyond static memorization benchmarks
Automating benchmark construction from evolving knowledge sources like Wikidata
Assessing retrieval-augmented reasoning with temporally grounded unique answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline constructs retrieval-dependent benchmarks from knowledge updates
Method computes deltas between Wikidata snapshots and synthesizes questions
Pipeline enables continual regeneration of temporally grounded benchmarks
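The unique-answer guarantee mentioned above (each question validated via SPARQL to admit exactly one answer) can be sketched with an in-memory pattern matcher standing in for a SPARQL endpoint. The filter logic and IDs here are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of the unique-answer filter: a candidate question,
# modeled as a (subject, predicate, ?x) pattern, is kept only if it
# matches exactly one object in the snapshot. A real pipeline would
# issue an equivalent SPARQL query against the knowledge base.

def answers(triples, subject, predicate):
    """All objects matching a (subject, predicate, ?x) pattern."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

def has_unique_answer(triples, subject, predicate):
    return len(answers(triples, subject, predicate)) == 1

kb = {
    ("Q1", "P35", "Q200"),   # one head of state -> unique, keep
    ("Q2", "P530", "Q30"),   # two diplomatic relations -> ambiguous, drop
    ("Q2", "P530", "Q40"),
}
```

Multi-hop variants would chain such patterns, requiring the composed query to remain uniquely answerable.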
Heng Zhou
Jiangnan University
Multi-modal Learning, Image Processing, Computer Vision, Remote Sensing
Ao Yu
University of Science and Technology of China
Yuchen Fan
Shanghai AI Laboratory & Shanghai Jiao Tong University
NLP, Large Language Models, Evaluation
Jianing Shi
London School of Economics
Li Kang
Shanghai AI Laboratory
Hejia Geng
Researcher @ Oxford
Yongting Zhang
University of Science and Technology of China
Yutao Fan
Harbin Institute of Technology
Yuhao Wu
SUTD
Tiancheng He
BUPT
Yiran Qin
University of Oxford
Lei Bai
Shanghai AI Laboratory
Foundation Model, Science Intelligence, Multi-Agent System, Autonomous Discovery
Zhenfei Yin
University of Oxford
Deep Learning, Multimodal, AI Agent, Robotics