AI Summary
This study identifies and systematically categorizes pervasive temporal leakage mechanisms in web-based retrieval that critically undermine the validity of retroactive forecasting evaluations. Focusing on widely used search engine date filters (e.g., Google's "before:" operator), the authors demonstrate that such methods introduce severe temporal contamination, leading to inflated performance estimates. Through a combination of manual auditing and automated analysis using a large language model (gpt-oss-120b), they find that 71% of retrieved documents exhibit some form of temporal leakage, with 41% directly revealing the answer. Consequently, the inclusion of these leaked documents artificially reduces the Brier score from 0.242 to 0.108, substantially distorting assessment reliability and exposing fundamental flaws in date-filtered retroactive evaluation protocols.
Abstract
Search-engine date filters are widely used to enforce pre-cutoff retrieval in retrospective evaluations of search-augmented forecasters. We show this approach is unreliable: in an audit of Google Search with a before: filter, 71% of questions return at least one page containing strong post-cutoff leakage, and for 41%, at least one page directly reveals the answer. Using a large language model (LLM), gpt-oss-120b, to forecast with these leaky documents, we observe inflated prediction accuracy (Brier score 0.108 vs. 0.242 with leak-free documents). We characterize common leakage mechanisms, including updated articles, related-content modules, unreliable metadata/timestamps, and absence-based signals, and argue that date-restricted search is insufficient for temporal evaluation. We recommend stronger retrieval safeguards or evaluation on frozen, time-stamped web snapshots to ensure credible retrospective forecasting.
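The Brier score used above is the standard mean squared error between forecast probabilities and binary outcomes, so lower values indicate better (or, here, leakage-inflated) accuracy. A minimal sketch of the computation; the probabilities in the usage line are illustrative, not the paper's data:

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities in [0, 1]
    and binary outcomes (1 = event occurred, 0 = it did not).
    Lower is better; 0.25 is the score of a constant 0.5 forecast."""
    assert len(probs) == len(outcomes) and len(probs) > 0
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Illustrative forecasts only (not from the study):
score = brier_score([0.9, 0.2, 0.7], [1, 0, 1])
print(round(score, 4))  # 0.0467
```

This makes the reported gap concrete: a drop from 0.242 to 0.108 means the squared forecast error was more than halved once leaky documents were included, which is why the authors treat date-filtered retrieval as unsound for retrospective evaluation.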