🤖 AI Summary
Existing QA evaluation benchmarks are static and memory-centric, so they inadequately assess models' real-time retrieval and multi-hop reasoning over dynamic knowledge. To address this, the authors introduce LiveSearchBench, a fully automated, scalable pipeline for constructing retrieval-dependent benchmarks of dynamic knowledge reasoning. LiveSearchBench builds temporally grounded questions by computing deltas between successive Wikidata snapshots; candidate triples undergo quality filtering, SPARQL validation guaranteeing a unique, verifiable answer, and natural-language synthesis, yielding questions spanning three levels of reasoning difficulty (one- to three-hop). Its core contribution shifts the evaluation paradigm from static fact memorization toward up-to-date retrieval and temporal multi-hop inference. Experiments reveal substantial performance degradation in mainstream LLMs on facts that post-date pretraining, with the gap most pronounced on multi-hop queries; retrieval augmentation and larger, instruction-tuned models yield only partial gains, highlighting dynamic knowledge reasoning as a critical unsolved challenge.
📝 Abstract
Evaluating large language models (LLMs) on question answering often relies on static benchmarks that reward memorization and understate the role of retrieval, failing to capture the dynamic nature of world knowledge. We present LiveSearchBench, an automated pipeline for constructing retrieval-dependent benchmarks from recent knowledge updates. Our method computes deltas between successive Wikidata snapshots, filters candidate triples for quality, and synthesizes natural-language questions at three levels of reasoning difficulty, each guaranteed to admit a unique, verifiable answer through SPARQL validation. The pipeline is fully automated, scalable across time, and minimizes human intervention, enabling continual regeneration of temporally grounded benchmarks. Experiments show a pronounced performance drop when models confront facts that post-date pretraining, with the gap most salient on multi-hop queries. Retrieval-augmented methods and larger, instruction-tuned models provide partial gains but fail to close this recency gap. By design, LiveSearchBench shifts evaluation from static memorization toward tasks that require up-to-date retrieval and reasoning, offering a foundation for systematic, long-term assessment of LLMs under evolving knowledge.
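The first two pipeline stages described above — computing a delta between successive knowledge snapshots and retaining only triples that admit a unique answer — can be sketched in miniature. This is an illustrative toy, not the paper's actual implementation: snapshots are modeled as in-memory sets of (subject, predicate, object) triples, and the uniqueness check stands in for the real SPARQL validation against Wikidata; all identifiers below are hypothetical.

```python
# Toy sketch of snapshot diffing and unique-answer filtering,
# assuming snapshots are sets of (subject, predicate, object) triples.
from collections import defaultdict


def snapshot_delta(old_triples, new_triples):
    """Return triples present in the newer snapshot but not the older one."""
    return set(new_triples) - set(old_triples)


def unique_answer_candidates(delta, new_triples):
    """Keep delta triples whose (subject, predicate) pair maps to exactly
    one object in the new snapshot, so a question built from (s, p) has a
    single verifiable answer (stand-in for SPARQL uniqueness validation)."""
    objects_by_sp = defaultdict(set)
    for s, p, o in new_triples:
        objects_by_sp[(s, p)].add(o)
    return {t for t in delta if len(objects_by_sp[(t[0], t[1])]) == 1}


# Hypothetical snapshots: Q1's head of government (P35) changed;
# Q2 gained two conflicting P6 values, so it cannot yield a unique answer.
old = {("Q1", "P35", "Alice")}
new = {("Q1", "P35", "Bob"), ("Q2", "P6", "Carol"), ("Q2", "P6", "Dan")}

delta = snapshot_delta(old, new)
candidates = unique_answer_candidates(delta, new)
# Only ("Q1", "P35", "Bob") survives: it is new and its (s, p) is unambiguous.
```

The surviving triples would then feed the question-synthesis stage, where natural-language questions are generated for one- to three-hop chains over such validated facts.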