Martingale Foresight Sampling: A Principled Approach to Inference-Time LLM Decoding

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard autoregressive decoding in large language models suffers from myopic behavior, hindering the exploration of globally optimal reasoning paths. This work proposes a novel inference framework that systematically integrates martingale theory into the decoding process, formulating it as an optimal stochastic process search problem. By leveraging the Doob decomposition, the optional stopping theorem, and the martingale convergence theorem, the authors develop a theoretically grounded sampling strategy that unifies path evaluation, pruning, and termination into a single optimization objective. The resulting method achieves significant performance gains across six reasoning benchmarks, simultaneously improving both accuracy and computational efficiency compared to existing approaches.
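The "predictable advantage" mentioned above comes from the Doob decomposition, which splits any adapted integrable process into a predictable trend plus a martingale. A worked statement of that decomposition, in notation chosen here for illustration (the paper's own symbols may differ):

```latex
% Doob decomposition of a path-quality process (V_t)_{t \ge 0}
% adapted to a filtration (\mathcal{F}_t). Notation is illustrative.
V_t = V_0 + A_t + M_t,
\qquad
A_t = \sum_{s=1}^{t} \mathbb{E}\!\left[\, V_s - V_{s-1} \,\middle|\, \mathcal{F}_{s-1} \right],
\qquad
M_t = V_t - V_0 - A_t .
% A_t is predictable (the cumulative expected improvement of the path,
% i.e. its "predictable advantage"); M_t is a martingale, so
% \mathbb{E}[M_t \mid \mathcal{F}_{t-1}] = M_{t-1}.
```

Under this reading, a step's value is its contribution to the predictable part A_t, while the martingale part M_t captures irreducible noise in the path's quality estimates.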

📝 Abstract
Standard autoregressive decoding in large language models (LLMs) is inherently short-sighted, often failing to find globally optimal reasoning paths due to its token-by-token generation process. While inference-time strategies like foresight sampling attempt to mitigate this by simulating future steps, they typically rely on ad-hoc heuristics for valuing paths and pruning the search space. This paper introduces Martingale Foresight Sampling (MFS), a principled framework that reformulates LLM decoding as a problem of identifying an optimal stochastic process. By modeling the quality of a reasoning path as a stochastic process, we leverage Martingale theory to design a theoretically-grounded algorithm. Our approach replaces heuristic mechanisms with principles from probability theory: step valuation is derived from the Doob Decomposition Theorem to measure a path's predictable advantage, path selection uses Optional Stopping Theory for principled pruning of suboptimal candidates, and an adaptive stopping rule based on the Martingale Convergence Theorem terminates exploration once a path's quality has provably converged. Experiments on six reasoning benchmarks demonstrate that MFS surpasses state-of-the-art methods in accuracy while significantly improving computational efficiency. Code will be released at https://github.com/miraclehetech/EACL2026-Martingale-Foresight-Sampling.
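The abstract describes three martingale-based components: valuing steps by a path's predictable advantage, pruning via an optional-stopping-style rule, and terminating once the quality process has converged. A minimal runnable sketch of that loop, using a simulated quality signal in place of an LLM; every function name, parameter, and the scoring model here are illustrative stand-ins, not the authors' implementation:

```python
import random

def predictable_advantage(values):
    """Empirical drift of the quality process: mean of observed increments.
    A crude stand-in for the predictable part A_t of a Doob decomposition."""
    if len(values) < 2:
        return 0.0
    increments = [b - a for a, b in zip(values, values[1:])]
    return sum(increments) / len(increments)

def has_converged(values, tol=1e-2, window=3):
    """Stop exploring once the last `window` increments are all below `tol`
    (a toy analogue of a martingale-convergence stopping rule)."""
    if len(values) < window + 1:
        return False
    tail = values[-(window + 1):]
    return all(abs(b - a) < tol for a, b in zip(tail, tail[1:]))

def foresight_search(score_step, n_paths=4, max_steps=20, seed=0):
    """Toy foresight sampling: roll candidate paths forward, prune those whose
    estimated drift turns negative, retire converged ones, return the best."""
    rng = random.Random(seed)
    paths = [{"id": i, "values": [0.0], "alive": True} for i in range(n_paths)]
    for _ in range(max_steps):
        for p in paths:
            if not p["alive"]:
                continue
            p["values"].append(p["values"][-1] + score_step(rng, p["id"]))
            if predictable_advantage(p["values"]) < 0:
                p["alive"] = False        # optional-stopping-style prune
            elif has_converged(p["values"]):
                p["alive"] = False        # quality has (empirically) converged
        if not any(p["alive"] for p in paths):
            break
    return max(paths, key=lambda p: p["values"][-1])

def score_step(rng, path_id):
    """Simulated per-step quality increment: path 0 drifts upward, the rest
    drift slightly downward. Replaces an actual LLM foresight evaluation."""
    drift = 0.05 if path_id == 0 else -0.01
    return drift + rng.gauss(0.0, 0.02)

best = foresight_search(score_step)
```

The point of the sketch is the control flow: a single quality process per candidate path, with valuation (`predictable_advantage`), pruning, and termination all driven by that one process rather than by separate heuristics.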
Problem

Research questions and friction points this paper is trying to address.

autoregressive decoding
foresight sampling
reasoning paths
stochastic process
Martingale theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Martingale Foresight Sampling
LLM decoding
stochastic process
Doob Decomposition
Optional Stopping Theory