Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments

📅 2026-01-12
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing approaches struggle to scalably evaluate the quality of large language models' judgments about scientific ideas. This work proposes the Proof of Time (PoT) benchmark framework, which constructs offline sandboxes via temporal slicing and asks models to predict subsequently observable signals, such as citation counts or shifts in research agendas, using only evidence available up to a frozen cutoff date, thereby enabling verifiable evaluation without extensive expert annotation. Integrating tool-augmented agents, prompt ablations, and budget-scaling strategies, PoT is validated across more than 30,000 instances spanning four scientific domains. Results show that larger interaction budgets generally improve agent performance over non-agent baselines, while the efficacy of tool use is highly task-dependent. The framework further supports analyses of human-agent judgment alignment and enables controlled evaluation of agent-based scientific reviewing.
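
To make the temporal slicing concrete, below is a minimal Python sketch of how a PoT-style instance might be assembled; the `Paper` dataclass, the `build_instance` helper, and the citation-count target are illustrative assumptions rather than the paper's released code.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Paper:
    # Hypothetical record; real PoT instances may carry richer evidence.
    paper_id: str
    published: date
    citing_dates: list[date] = field(default_factory=list)

def build_instance(paper: Paper, corpus: list[Paper],
                   cutoff: date, horizon: date) -> dict:
    """Freeze a pre-cutoff evidence snapshot and derive a post-cutoff target."""
    # Offline sandbox: only evidence dated on or before the cutoff is visible.
    snapshot = [p for p in corpus if p.published <= cutoff]
    # Future-verifiable label: citations accrued in the window (cutoff, horizon].
    label = sum(1 for d in paper.citing_dates if cutoff < d <= horizon)
    return {"paper_id": paper.paper_id, "evidence": snapshot, "label": label}
```

Freezing the snapshot by date is what makes the task verifiable later: once the horizon passes, the label can be computed mechanically, with no expert annotation.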

📝 Abstract
Large language models are increasingly being used to assess and forecast research ideas, yet we lack scalable ways to evaluate the quality of models' judgments about these scientific ideas. Towards this goal, we introduce PoT, a semi-verifiable benchmarking framework that links scientific idea judgments to downstream signals that become observable later (e.g., citations and shifts in researchers' agendas). PoT freezes a pre-cutoff snapshot of evidence in an offline sandbox and asks models to forecast post-cutoff outcomes, enabling verifiable evaluation when ground truth arrives, scalable benchmarking without exhaustive expert annotation, and analysis of human-model misalignment against signals such as peer-review awards. In addition, PoT provides a controlled testbed for agent-based judgment of scientific ideas, comparing tool-using agents to non-agent baselines under prompt ablations and budget scaling. Across 30,000+ instances spanning four benchmark domains, we find that, compared with non-agent baselines, higher interaction budgets generally improve agent performance, while the benefit of tool use is strongly task-dependent. By combining time-partitioned, future-verifiable targets with an offline sandbox for tool use, PoT supports scalable evaluation of agents on future-facing scientific idea judgment tasks.
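
As one way to score forecasts once ground truth arrives, the sketch below rank-correlates model predictions with realized post-cutoff signals; the Spearman-style metric (and its tie-free rank assignment) is an assumption chosen for illustration, not necessarily the benchmark's official scoring rule.

```python
from statistics import mean

def rank(values: list[float]) -> list[float]:
    # Assign ranks by sort order; ties get arbitrary but consistent ranks.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(pred: list[float], truth: list[float]) -> float:
    """Pearson correlation of ranks between forecasts and realized signals."""
    rp, rt = rank(pred), rank(truth)
    mp, mt = mean(rp), mean(rt)
    cov = sum((a - mp) * (b - mt) for a, b in zip(rp, rt))
    sp = sum((a - mp) ** 2 for a in rp) ** 0.5
    st = sum((b - mt) ** 2 for b in rt) ** 0.5
    return cov / (sp * st) if sp and st else 0.0

# Same ordering of forecasts and outcomes gives perfect rank correlation.
print(spearman([3.0, 1.0, 2.0], [30.0, 5.0, 12.0]))  # 1.0
```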
Problem

Research questions and friction points this paper is trying to address.

scientific idea judgment
large language models
evaluation benchmark
forecasting research impact
agent-based evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proof of Time
scientific idea judgment
agent-based evaluation
time-partitioned benchmarking
offline sandbox