🤖 AI Summary
This paper argues that recent claims of large language models (LLMs) matching or exceeding human forecasters should be treated with caution, because evaluating LLM forecasters presents unique methodological challenges. The authors identify two broad categories of problems: (1) temporal leakage, in which models are exposed, in many possible forms, to information from after a question's forecast date, making evaluation results unreliable and often overly optimistic; and (2) difficulty extrapolating from benchmark performance to real-world forecasting scenarios. Through systematic analysis and concrete examples from prior work, they show how such evaluation flaws can undermine both current and future performance claims, including headline "human-surpassing" results. The paper concludes that more rigorous, temporally sound evaluation methodologies are needed before the forecasting abilities of LLMs can be confidently assessed.
📝 Abstract
Large language models (LLMs) have recently been applied to forecasting tasks, with some works claiming these systems match or exceed human performance. In this paper, we argue that, as a community, we should be careful about such conclusions, as evaluating LLM forecasters presents unique challenges. We identify two broad categories of issues: (1) difficulty in trusting evaluation results due to many forms of temporal leakage, and (2) difficulty in extrapolating from evaluation performance to real-world forecasting. Through systematic analysis and concrete examples from prior work, we demonstrate how evaluation flaws can raise concerns about current and future performance claims. We argue that more rigorous evaluation methodologies are needed to confidently assess the forecasting abilities of LLMs.
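To make the first category of issues concrete, below is a minimal sketch (not taken from the paper; the field names `resolution_date` and `knowledge_cutoff` are assumptions for illustration) of the kind of temporal-consistency check the paper's argument implies: flagging evaluation questions whose outcomes were already resolved before a model's training-data cutoff, since "forecasts" on such questions may simply be recall of training data.

```python
from datetime import date

def audit_temporal_leakage(questions, knowledge_cutoff: date):
    """Split forecasting questions into leak-prone and leak-free sets.

    A question is treated as leak-prone if its outcome was resolved on or
    before the model's training-data cutoff, so the model may have seen the
    answer rather than genuinely predicted it. This is an illustrative
    heuristic, not a complete audit (e.g., it ignores leakage via retrieval
    or via indirect hints about not-yet-resolved events).
    """
    leak_prone, leak_free = [], []
    for q in questions:
        if q["resolution_date"] <= knowledge_cutoff:
            leak_prone.append(q)
        else:
            leak_free.append(q)
    return leak_prone, leak_free


if __name__ == "__main__":
    # Hypothetical evaluation questions with known resolution dates.
    questions = [
        {"id": "q1", "text": "Will X happen by mid-2023?", "resolution_date": date(2023, 6, 30)},
        {"id": "q2", "text": "Will Y happen by end of 2025?", "resolution_date": date(2025, 12, 31)},
    ]
    leaky, clean = audit_temporal_leakage(questions, knowledge_cutoff=date(2024, 4, 30))
    print(f"{len(leaky)} leak-prone question(s), {len(clean)} usable for forward-looking evaluation")
```

Even a check like this only addresses explicit leakage through resolution dates; the paper's broader point is that many subtler forms (and the gap between benchmark and real-world forecasting) also need to be handled before performance claims can be trusted.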