🤖 AI Summary
Existing LLM evaluation benchmarks are static and quickly become outdated, so they cannot capture how model performance evolves over time. Method: The authors propose Daily Oracle, a dynamic benchmark that automatically constructs temporally ordered question-answer pairs from daily news, enabling continuous assessment of LLMs' future-event prediction ability and temporal generalization. The pipeline combines news collection, QA generation grounded in dated events, and performance tracking across timestamps, with evaluation of RAG-enhanced variants. Contributions/Results: (1) LLM prediction accuracy degrades as pretraining data becomes more outdated relative to the questions; (2) RAG can improve prediction accuracy but does not reverse the degradation pattern; (3) the benchmark provides an empirical characterization of how pretraining-data staleness relates to performance deterioration. These findings underscore the need for continual model updating to preserve temporal robustness.
📝 Abstract
Many existing evaluation benchmarks for Large Language Models (LLMs) quickly become outdated due to the emergence of new models and training data. These benchmarks also fall short in assessing how LLM performance changes over time, as they consist of static questions without a temporal dimension. To address these limitations, we propose using future event prediction as a continuous evaluation method to assess LLMs' temporal generalization and forecasting abilities. Our benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" event outcomes. Our findings reveal that as pre-training data becomes outdated, LLM performance degrades over time. While Retrieval Augmented Generation (RAG) has the potential to enhance prediction accuracy, the performance degradation pattern persists, highlighting the need for continuous model updates.
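The core evaluation idea, dating each generated QA pair so that questions resolved after a model's pretraining cutoff become genuine forecasting questions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; names like `QAPair` and `split_by_cutoff` are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class QAPair:
    """A QA pair generated from a dated news article (illustrative)."""
    question: str
    answer: str        # e.g. "yes"/"no" for a binary outcome question
    resolved_on: date  # date the underlying news event settled the answer

def split_by_cutoff(pairs, cutoff):
    """Partition QA pairs into those potentially covered by pretraining data
    (resolved on or before the cutoff) and true future-prediction questions."""
    seen = [p for p in pairs if p.resolved_on <= cutoff]
    future = [p for p in pairs if p.resolved_on > cutoff]
    return seen, future

def accuracy(predictions, pairs):
    """Fraction of pairs whose predicted answer matches the gold answer."""
    if not pairs:
        return 0.0
    correct = sum(predictions[p.question] == p.answer for p in pairs)
    return correct / len(pairs)
```

Tracking `accuracy` on the `future` partition month by month is what surfaces the degradation pattern: the further a question's resolution date lies past the cutoff, the less the model's pretraining data can help.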