Consistency Checks for Language Model Forecasters

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating large language models (LLMs) on future-event prediction is difficult because ground truth only becomes available after events resolve. Method: This paper proposes an instantaneous quality-assessment framework grounded in logical consistency, specifically a probability-arbitrage principle: a forecaster whose probabilities violate the laws of probability can be traded against for guaranteed profit. It introduces an automated pipeline comprising question generation, logical-relation modeling, and probabilistic consistency checking, integrated with proper scoring rules and batch evaluation interfaces. Contributions/Results: (1) the first generalized arbitrage-based consistency metric; (2) a benchmark dataset that is immediately measurable yet retains long-term validity (final evaluation results will be disclosed in 2028); (3) empirical analysis demonstrating strong correlation (ρ > 0.85) between the consistency score and future Brier scores, with millisecond-level inference latency across mainstream LLM predictors. The code and benchmark dataset are publicly available.
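The arbitrage principle the summary describes can be made concrete with a small sketch. The function below (a hypothetical illustration, not the paper's released code) computes the guaranteed per-unit profit an arbitrageur can lock in against a forecaster's probabilities for a set of mutually exclusive events: selling a contract on every event collects the sum of the quoted prices while paying out at most 1, and, if the events are also exhaustive, buying every contract pays the sum while collecting exactly 1.

```python
def arbitrage_profit(probs, exhaustive=True):
    """Guaranteed per-unit profit obtainable against a forecaster whose
    probabilities for mutually exclusive events are `probs`.
    Returns 0.0 when the probabilities admit no arbitrage."""
    total = sum(probs)
    # Sell one $1-payout contract per event at the forecaster's price:
    # collect `total`, pay out at most 1 (events are mutually exclusive).
    profit_selling = total - 1.0
    # Buy one contract per event: pay `total`, receive exactly 1 if the
    # events are also exhaustive (exactly one must occur).
    profit_buying = (1.0 - total) if exhaustive else 0.0
    return max(profit_selling, profit_buying, 0.0)

# The paper's running example: both parties quoted at 60% for one election.
print(arbitrage_profit([0.6, 0.6]))    # 0.2 guaranteed profit -> inconsistent
print(arbitrage_profit([0.55, 0.45]))  # 0.0 -> no arbitrage, consistent
```

A consistency score can then penalize a forecaster in proportion to the profit extractable across many such instantiated checks.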

📝 Abstract
Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the future. Recent work showing LLM forecasters rapidly approaching human-level performance raises the question: how can we benchmark and evaluate these forecasters instantaneously? Following the consistency check framework, we measure the performance of forecasters in terms of the consistency of their predictions on different logically related questions. We propose a new, general consistency metric based on arbitrage: for example, if a forecasting AI illogically predicts that both the Democratic and Republican parties have 60% probability of winning the 2024 US presidential election, an arbitrageur can trade against the forecaster's predictions and make a profit. We build an automated evaluation system that generates a set of base questions, instantiates consistency checks from these questions, elicits the predictions of the forecaster, and measures the consistency of the predictions. We then build a standard, proper-scoring-rule forecasting benchmark, and show that our (instantaneous) consistency metrics correlate with LLM forecasters' ground-truth Brier scores (which are only known in the future). We also release a consistency benchmark that resolves in 2028, providing a long-term evaluation tool for forecasting.
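The proper scoring rule referenced in the abstract's benchmark is the Brier score, which is the mean squared error between predicted probabilities and realized binary outcomes. A minimal sketch (the function name is illustrative, not from the paper's codebase):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and realized
    binary outcomes (0 or 1). Lower is better; 0 is a perfect forecast."""
    if len(probs) != len(outcomes):
        raise ValueError("probs and outcomes must have equal length")
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A forecaster who said 90% on a question that resolved Yes and 20% on
# one that resolved No:
print(brier_score([0.9, 0.2], [1, 0]))  # 0.025
```

Because the Brier score is proper, a forecaster minimizes its expected value by reporting honest probabilities, which is what makes it a natural ground-truth counterpart to the instantaneous consistency metrics.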
Problem

Research questions and friction points this paper is trying to address.

Language Model
Coherence
Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coherence Evaluation
Predictive Accuracy
Longitudinal Validation