Martingale Score: An Unsupervised Metric for Bayesian Rationality in LLM Reasoning

📅 2025-12-02
🤖 AI Summary
Large language models (LLMs) frequently exhibit belief entrenchment and confirmation bias in open-ended reasoning, undermining their capacity for truth-seeking. To address this, the paper introduces the Martingale Score, an unsupervised evaluation metric grounded in Bayesian statistics that quantifies belief rigidity by testing whether belief updates are predictable from current beliefs, requiring no ground-truth labels. Validated across diverse open-domain tasks, including event forecasting, value-laden judgment, and academic paper review, the method detects widespread belief entrenchment and shows a strong correlation between the Martingale Score and final answer accuracy, making the score a useful proxy for rational inference capability. The core contribution is the first unsupervised LLM reasoning evaluation framework founded on principles of probabilistic rationality.

📝 Abstract
Recent advances in reasoning techniques have substantially improved the performance of large language models (LLMs), raising expectations for their ability to provide accurate, truthful, and reliable information. However, emerging evidence suggests that iterative reasoning may foster belief entrenchment and confirmation bias rather than enhancing truth-seeking behavior. In this study, we propose a systematic evaluation framework for belief entrenchment in LLM reasoning by leveraging the Martingale property from Bayesian statistics. This property implies that, under rational belief updating, the expected value of future beliefs should remain equal to the current belief; that is, belief updates are unpredictable from the current belief. We propose the unsupervised, regression-based Martingale Score to measure violations of this property, which signal deviations from rational Bayesian updating on new evidence. In open-ended problem domains including event forecasting, value-laden questions, and academic paper review, we find such violations to be widespread across models and setups: the current belief positively predicts future belief updates, a phenomenon we term belief entrenchment. We identify the models, reasoning techniques, and domains most prone to belief entrenchment. Finally, we validate the Martingale Score by showing that it predicts ground-truth accuracy on problem domains where ground-truth labels are available. This indicates that, while designed as an unsupervised metric that operates even in domains without access to ground truth, the Martingale Score is a useful proxy for the truth-seeking ability of a reasoning process.
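The regression idea behind the score can be sketched briefly: under the martingale property, the expected update E[b_{t+1} − b_t | b_t] is zero, so regressing belief updates on current beliefs should give a slope near zero; a positive slope indicates entrenchment. Below is a minimal, hypothetical pure-Python illustration of that slope computation on a single belief trajectory; the paper's actual regression specification (e.g. pooling across problems, controls, belief elicitation) is not detailed here and may differ.

```python
def martingale_score(beliefs):
    """OLS slope of belief updates (b_{t+1} - b_t) regressed on current belief b_t.

    Under rational Bayesian updating, beliefs form a martingale and the slope
    is ~0; a positive slope signals belief entrenchment (beliefs drift further
    in the direction they already lean), a negative slope mean reversion.
    Illustrative sketch only, not the paper's exact estimator.
    """
    x = beliefs[:-1]                                        # current belief b_t
    y = [b1 - b0 for b0, b1 in zip(beliefs, beliefs[1:])]   # update b_{t+1} - b_t
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Entrenched trajectory: each step moves 50% further away from the 0.5 prior,
# so updates grow with the current belief (positive slope).
entrenched = [0.6, 0.65, 0.725, 0.8375]
print(round(martingale_score(entrenched), 3))  # → 0.5 (positive: entrenchment)
```

A martingale-consistent trajectory would instead show updates uncorrelated with the current belief, yielding a slope near zero.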
Problem

Research questions and friction points this paper is trying to address.

How can belief entrenchment in LLM reasoning be measured without ground-truth labels?
How far do LLM belief updates deviate from rational Bayesian updating?
Which models, reasoning techniques, and domains are most prone to confirmation bias?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised regression-based Martingale Score metric
Leverages Martingale property from Bayesian statistics
Validates the score as a proxy for truth-seeking via its correlation with ground-truth accuracy