The Markovian Thinker

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Reinforcement learning (RL) training of large language models with long chain-of-thought (LongCoT) reasoning faces two fundamental bottlenecks: an unbounded state space and attention cost that grows quadratically, O(L²), in the reasoning length L. To address this, the paper proposes the Markovian Thinking paradigm, which structures reasoning into fixed-size chunks within a novel Delethink environment; at each chunk boundary, the model writes a compact, continuation-ready textual state, decoupling reasoning depth from context window size. This reduces compute to linear, O(L), and enables arbitrarily long reasoning chains. Trained in this environment, an R1-Distill 1.5B model reasons up to 24K tokens within an 8K-token context, matching or surpassing full-context LongCoT baselines. At an average thinking length of 96K tokens, its estimated training cost is roughly a quarter of the LongCoT baseline's (7 vs. 27 H100-months), with continued gains under test-time scaling.

📝 Abstract
Reinforcement learning (RL) has recently become a strong recipe for training reasoning LLMs that produce long chains of thought (LongCoT). Yet the standard RL "thinking environment", where the state is the prompt plus all prior reasoning tokens, makes the state unbounded and forces attention-based policies to pay quadratic compute as thoughts lengthen. We revisit the environment itself. We propose Markovian Thinking, a paradigm in which the policy advances reasoning while conditioning on a constant-size state, decoupling thinking length from context size. As an immediate consequence this yields linear compute with constant memory. We instantiate this idea with Delethink, an RL environment that structures reasoning into fixed-size chunks. Within each chunk, the model thinks as usual; at the boundary, the environment resets the context and reinitializes the prompt with a short carryover. Through RL, the policy learns to write a textual state near the end of each chunk sufficient for seamless continuation of reasoning after reset. Trained in this environment, an R1-Distill 1.5B model reasons in 8K-token chunks yet thinks up to 24K tokens, matching or surpassing LongCoT-RL trained with a 24K budget. With test-time scaling, Delethink continues to improve where LongCoT plateaus. The effect of linear compute is substantial: we empirically estimate at 96K average thinking length LongCoT-RL costs 27 H100-months vs. 7 for Delethink. Analysis at RL initialization shows off-the-shelf reasoning models (1.5B-120B) often sample Markovian traces zero-shot across diverse benchmarks, providing positive samples that make RL effective at scale. Our results show that redesigning the thinking environment is a powerful lever: it enables very long reasoning without quadratic overhead and opens a path toward efficient, scalable reasoning LLMs.
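The chunk-and-reset mechanism the abstract describes can be sketched as a simple rollout loop. This is a minimal illustration, not the paper's implementation: the `generate` sampling function, the `</answer>` stop marker, and the carryover size are all hypothetical stand-ins, and the chunk/budget numbers are only meant to mirror the 8K/24K setting mentioned above.

```python
# Sketch of a Delethink-style chunked rollout with a constant-size context.
# `generate(prompt, max_tokens)` is a hypothetical sampling function; the
# carryover size and stop marker are illustrative assumptions, not the
# paper's exact settings.

CHUNK_SIZE = 8192   # max tokens the model may produce per chunk
CARRYOVER = 512     # tail of the previous chunk kept as textual state
MAX_THINK = 24576   # total thinking budget across all chunks

def delethink_rollout(question, generate):
    """Advance reasoning chunk by chunk; the prompt never grows unboundedly."""
    prompt = question
    trace = []
    produced = 0
    while produced < MAX_THINK:
        chunk = generate(prompt, max_tokens=CHUNK_SIZE)
        trace.append(chunk)
        produced += len(chunk.split())  # crude token proxy for this sketch
        if "</answer>" in chunk:        # model signals it has finished
            break
        # Boundary reset: reinitialize the prompt with only the question
        # plus a short carryover from the end of the last chunk. Through RL
        # the policy learns to make this carryover a sufficient Markovian
        # state for seamless continuation after the reset.
        carry = chunk[-CARRYOVER:]
        prompt = question + "\n" + carry
    return "".join(trace)
```

Because the prompt is rebuilt from a fixed-size carryover at every boundary, per-chunk attention cost is bounded by the chunk size, which is where the linear total compute comes from.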
Problem

Research questions and friction points this paper addresses.

Quadratic attention compute as reasoning chains lengthen
Unbounded state growth in the standard LongCoT thinking environment
Thinking length coupled to the context window size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markovian Thinking conditions the policy on a constant-size state
Delethink environment structures reasoning into fixed-size chunks with context resets
Achieves linear compute with constant memory as thinking length grows
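The compute claim above follows from a back-of-envelope count of attention token-pair interactions: a LongCoT rollout attends to all prior tokens (roughly L²/2 pairs over a length-L trace), while a chunked rollout caps the context at the chunk size. The sketch below uses the 96K/8K figures from the abstract for illustration; these are pair counts, not measured wall-clock costs.

```python
# Illustrative comparison of attention token-pair counts: quadratic for a
# full-context LongCoT rollout vs. linear for a chunked Markovian rollout.
# Numbers are back-of-envelope, not measured costs from the paper.

def longcot_pairs(L):
    # token t attends to t prior tokens -> sum_{t=1}^{L} t, i.e. ~L^2/2
    return sum(t for t in range(1, L + 1))

def delethink_pairs(L, C):
    # within each chunk the context never exceeds C -> linear in L overall
    full_chunks, rem = divmod(L, C)
    per_chunk = sum(t for t in range(1, C + 1))
    return full_chunks * per_chunk + sum(t for t in range(1, rem + 1))

L, C = 96_000, 8_000
ratio = longcot_pairs(L) / delethink_pairs(L, C)
print(f"LongCoT / Delethink attention-pair ratio at L={L}: {ratio:.1f}x")
```

At these settings the pair count drops by about a factor of L/C, consistent in spirit with the 27 vs. 7 H100-month estimate in the abstract (which also reflects other costs beyond attention).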