Certified Self-Consistency: Statistical Guarantees and Test-Time Training for Reliable Reasoning in LLMs

📅 2025-10-20
🤖 AI Summary
Unsupervised inference methods for large language models, such as self-consistency majority voting and test-time reinforcement learning, lack theoretical justification and statistical guarantees. Method: We propose a unified, certifiable inference framework that establishes, for the first time, a formal theoretical connection between self-consistency and test-time reinforcement learning. Our approach introduces the Martingale Majority Certificate, a dynamic stopping rule grounded in martingale theory, integrated with finite-sample concentration inequalities, anytime-valid inference, exponentially tilted distribution modeling, and label-free post-training optimization. Contribution: We derive quantifiable, high-probability confidence bounds on correctness, substantially reducing the number of samples required for certification. This improves both inference efficiency and stability, providing the first statistically rigorous and practically applicable certification foundation for reliable unsupervised reasoning.
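The "exponentially tilted distribution" idea can be illustrated with a toy categorical model. The sketch below (illustrative only; `tilt` and its power-tilting form are assumptions, not the paper's construction) shows how raising probabilities to a power beta > 1 concentrates mass on the mode, which is the mechanism by which label-free post-training is said to reduce the samples needed for certification.

```python
import math

def tilt(probs, beta):
    """Exponentially tilt a categorical distribution: p_i -> p_i**beta / Z.

    Since p**beta == exp(beta * log p), this is exponential tilting in the
    log-probabilities. beta > 1 sharpens the distribution toward its mode,
    a toy model of the sharpening effect attributed to TTRL-style training.
    """
    weights = [p ** beta for p in probs]
    z = sum(weights)  # normalising constant
    return [w / z for w in weights]

# A model that answers correctly 50% of the time...
p = [0.5, 0.3, 0.2]
# ...after tilting, the mode's mass grows from 0.5 to about 0.66.
sharp = tilt(p, 2.0)
```

A sharper answer distribution means the empirical majority separates from the runner-up faster, so fewer sampled chains are needed before a confidence bound certifies the mode.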

📝 Abstract
Recent advances such as self-consistency and test-time reinforcement learning (TTRL) improve the reliability of large language models (LLMs) without additional supervision, yet their underlying mechanisms and statistical guarantees remain poorly understood. We present a unified framework for certifiable inference in LLMs, showing that majority voting provides a statistical certificate of self-consistency: under mild assumptions, the aggregated answer coincides with the mode of the model's terminal distribution with high probability. We derive finite-sample and anytime-valid concentration bounds that quantify this confidence, and introduce the Martingale Majority Certificate (MMC), a sequential stopping rule that adaptively determines when sufficient samples have been drawn. We further prove that label-free post-training methods such as TTRL implicitly sharpen the answer distribution by exponentially tilting it toward its mode, thereby reducing the number of samples required for certification. Building on this insight, we propose new post-training objectives that explicitly optimise this trade-off between sharpness and bias. Together, these results explain and connect two central test-time scaling strategies, self-consistency and TTRL, within a single statistical framework for label-free, certifiable reliability in reasoning LLMs.
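The abstract's core claim, that majority voting yields a statistical certificate that the aggregated answer is the mode of the model's answer distribution, can be sketched with a simple fixed-sample check. The bound below is a crude Hoeffding-plus-union-bound stand-in chosen for transparency; the paper's finite-sample and anytime-valid bounds are tighter, and all names here are hypothetical.

```python
import math
from collections import Counter

def majority_certificate(answers, delta=0.05):
    """Certify, at confidence 1 - delta, that the plurality answer is the
    mode of the underlying answer distribution.

    Each of the top-two empirical frequencies deviates from its mean by at
    most eps with probability 1 - delta/2 (Hoeffding), so if the empirical
    margin exceeds 2*eps the true margin is positive with probability
    at least 1 - delta. Conservative, but illustrates the certificate idea.
    """
    n = len(answers)
    top_two = Counter(answers).most_common(2)
    top, top_count = top_two[0]
    runner_count = top_two[1][1] if len(top_two) > 1 else 0
    margin = (top_count - runner_count) / n
    eps = math.sqrt(math.log(4 / delta) / (2 * n))
    return top, margin > 2 * eps

# 80 of 100 sampled reasoning chains agree: certified.
ans, ok = majority_certificate(["42"] * 80 + ["41"] * 20)
```

With a 60-point empirical margin over 100 samples the bound certifies the answer; with a near-tie over a handful of samples it correctly declines to.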
Problem

Research questions and friction points this paper is trying to address.

Providing statistical guarantees for self-consistency in LLM reasoning
Developing adaptive certification methods for reliable LLM inference
Connecting test-time scaling strategies within unified statistical framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Majority voting provides statistical certificate for self-consistency
Martingale Majority Certificate adaptively determines sampling sufficiency
Post-training methods sharpen distribution to reduce certification samples
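The second bullet, adaptive determination of sampling sufficiency, can be sketched as a sequential loop that stops as soon as the plurality answer is certifiable. The stopping threshold below spends delta / (n*(n+1)) at each step n (these terms sum to at most delta over all n), a loose union-bound-over-time stand-in for the paper's martingale-based MMC; `sample_fn` and all other names are hypothetical.

```python
import math
from collections import Counter

def sequential_majority(sample_fn, delta=0.05, max_samples=500):
    """Draw answers one at a time; stop once the plurality is certified.

    At step n the deviation budget is delta / (n*(n+1)), so the total
    error probability over every possible stopping time is at most delta,
    making the stop anytime-valid (though far looser than an MMC-style
    martingale certificate).
    """
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_fn()] += 1
        top_two = counts.most_common(2)
        runner = top_two[1][1] if len(top_two) > 1 else 0
        margin = (top_two[0][1] - runner) / n
        eps = math.sqrt(math.log(4 * n * (n + 1) / delta) / (2 * n))
        if margin > 2 * eps:
            return top_two[0][0], n, True   # certified early stop
    return counts.most_common(1)[0][0], max_samples, False
```

On a sharply peaked answer distribution the loop stops after a few dozen samples; on a flat one it exhausts the budget without certifying, which is exactly the trade-off the sharpness-inducing post-training objectives are meant to optimise.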
Paula Cordero-Encinar
Department of Mathematics, Imperial College London, UK.
Andrew B. Duncan
Imperial College London
Stochastic Computation · Machine Learning · Computational Statistics