Reasoning as Compression: Unifying Budget Forcing via the Conditional Information Bottleneck

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational and token costs of chain-of-thought (CoT) reasoning in large language models, which often arise from redundant intermediate steps despite improved accuracy on complex tasks. The authors formulate efficient reasoning as a lossy compression problem under a conditional information bottleneck (CIB), jointly optimizing task performance and compression of reasoning trajectories via reinforcement learning to retain only response-relevant information not already implicit in the prompt. The proposed CIB framework overcomes the violation of Markov assumptions in standard information bottlenecks caused by Transformer attention mechanisms and unifies existing heuristic budgeting approaches. Furthermore, it replaces coarse token-counting with a semantic prior based on language model surprisal. Experiments demonstrate that the method improves accuracy at moderate compression levels and maintains logical coherence and fluency even under aggressive compression, significantly outperforming token-based baselines.

📝 Abstract
Chain-of-Thought (CoT) prompting improves LLM accuracy on complex tasks but often increases token usage and inference cost. Existing "Budget Forcing" methods reduce cost via fine-tuning with heuristic length penalties, but these suppress both essential reasoning and redundant filler. We recast efficient reasoning as a lossy compression problem under the Information Bottleneck (IB) principle, and identify a key theoretical gap when applying naive IB to transformers: attention violates the Markov property between prompt, reasoning trace, and response. To resolve this issue, we model CoT generation under the Conditional Information Bottleneck (CIB) principle, where the reasoning trace Z acts as a computational bridge that contains only the information about the response Y that is not directly accessible from the prompt X. This yields a general Reinforcement Learning objective: maximize task reward while compressing completions under a prior over reasoning traces, subsuming common heuristics (e.g., length penalties) as special cases (e.g., uniform priors). In contrast to naive token-counting approaches, we introduce a semantic prior that measures token cost by surprisal under a language model prior. Empirically, our CIB objective prunes cognitive bloat while preserving fluency and logic, improving accuracy at moderate compression and enabling aggressive compression with minimal accuracy drop.
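The abstract contrasts a uniform token-count penalty with a surprisal-based semantic prior. A minimal sketch of how such a compression cost might be computed, using a toy unigram distribution in place of a real language-model prior; all function names and the `beta` weighting are illustrative assumptions, not details from the paper:

```python
import math

def surprisal_penalty(trace_tokens, prior):
    """Semantic compression cost: sum of per-token surprisal -log p(token)
    under a prior over tokens. Informative (low-probability) tokens cost
    more; predictable filler is nearly free. A real system would use a
    language model's conditional probabilities, not a unigram table."""
    return sum(-math.log(prior.get(t, 1e-6)) for t in trace_tokens)

def token_count_penalty(trace_tokens, cost_per_token=1.0):
    """Uniform-prior special case: every token costs the same,
    recovering an ordinary length penalty."""
    return cost_per_token * len(trace_tokens)

def cib_objective(task_reward, trace_tokens, prior, beta=0.1):
    """RL objective sketch: task reward minus a beta-weighted
    compression cost on the reasoning trace."""
    return task_reward - beta * surprisal_penalty(trace_tokens, prior)

# Toy unigram prior: filler words are likely, technical terms are not.
prior = {"so": 0.1, "the": 0.2, "entropy": 0.001}
# Filler is cheaper than an informative token under the semantic prior,
# while a token-count penalty charges both the same.
print(surprisal_penalty(["so"], prior) < surprisal_penalty(["entropy"], prior))
print(token_count_penalty(["so"]) == token_count_penalty(["entropy"]))
```

Under a uniform prior every token has identical surprisal, so the semantic penalty collapses to a scaled length penalty, which is how the framework subsumes the token-counting heuristics.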
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
Budget Forcing
Information Bottleneck
Reasoning Compression
Inference Cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional Information Bottleneck
Chain-of-Thought Compression
Semantic Prior
Budget Forcing
Reinforcement Learning for Reasoning