Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) rely on lengthy chain-of-thought (CoT) sequences for code reasoning, incurring substantial training costs, high inference latency, and deployment challenges. Existing compression methods suffer from critical limitations: token-level compression degrades logical coherence, while perplexity-driven step-level compression fails to reliably identify semantically critical reasoning steps. To address these issues, we propose ASAP, the first logic-aware CoT compression framework that leverages *first-token surprisal*—a novel metric quantifying the model's uncertainty at the onset of each reasoning step—to guide compression. Combined with anchor-guided pruning, ASAP performs interpretable, hierarchical (coarse-to-fine) compression that preserves both semantic fidelity and logical integrity, and teaches models to generate concise CoTs on their own. Evaluated on LiveCodeBench v4_v5, ASAP reduces generated token count by 23.5% and inference latency by 43.5%, achieving a Pass@1 score of 36.19%—significantly outperforming state-of-the-art baselines.

📝 Abstract
Recently, Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in code reasoning by scaling up the length of Chain-of-Thought (CoT). However, excessively long reasoning traces introduce substantial challenges in terms of training cost, inference latency, and deployment feasibility. While various CoT compression approaches have emerged to address this challenge, they face inherent trade-offs: token-level methods often disrupt syntactic and logical coherence, while step-level methods based on perplexity fail to reliably capture the logically critical reasoning steps. In this paper, we propose ASAP (Anchor-guided, Surprisal-based Pruning), a novel coarse-to-fine framework for CoT compression. ASAP first performs anchor-guided pruning to preserve the core reasoning structure, which efficiently reduces the search space for subsequent processing. It then performs logic-aware pruning, selecting logically essential reasoning steps based on a novel first-token surprisal metric. Finally, ASAP teaches models to autonomously generate and leverage these concise CoTs at inference time, enabling efficient reasoning in coding tasks. Experiments show that ASAP achieves state-of-the-art accuracy across multiple code generation benchmarks while substantially reducing training and inference costs. On the challenging LiveCodeBench v4_v5 benchmark, our approach reduces token generation by 23.5% and inference latency by 43.5% compared to the strongest baseline, while achieving a competitive Pass@1 accuracy of 36.19%. Our results highlight a promising direction for building powerful and efficient LRMs.
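The first-token surprisal metric described above can be made concrete with a minimal sketch. Surprisal is simply the negative log-probability of a step's first token given the preceding context; in practice the probability would come from the LRM's next-token distribution at each step boundary, so the values below are toy placeholders, not outputs of the paper's model.

```python
import math

def first_token_surprisal(p_first_token: float) -> float:
    # Surprisal of a reasoning step's onset: -log p(first token | context).
    # High surprisal means the model did not anticipate this step, which
    # the paper treats as a signal that the step is logically critical.
    return -math.log(p_first_token)

# A step the model begins confidently carries low surprisal...
print(round(first_token_surprisal(0.9), 3))
# ...while an unexpected step onset carries high surprisal.
print(round(first_token_surprisal(0.05), 3))
```

The intuition: a low-surprisal step is one the model would have produced anyway, so pruning it loses little; a high-surprisal step introduces information the model could not infer from context and should be kept.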
Problem

Research questions and friction points this paper is trying to address.

Reduce training and inference costs in code reasoning models
Maintain logical coherence in Chain-of-Thought compression
Improve efficiency without sacrificing accuracy in code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anchor-guided pruning preserves core reasoning structure
First-token surprisal metric selects essential reasoning steps
Autonomous concise CoT generation for efficient inference
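The coarse-to-fine pipeline in the bullets above can be sketched as a two-stage filter. This is an illustrative sketch only: the anchor cues, keep ratio, and selection rule here are hypothetical stand-ins, not the paper's actual anchors or thresholds, and the per-step first-token log-probabilities are assumed to be supplied by the reasoning model.

```python
import math

def compress_cot(steps, first_logps, anchor_words=("therefore", "return", "def"),
                 keep_ratio=0.5):
    """Two-stage ASAP-style compression (illustrative sketch).

    Stage 1 (coarse, anchor-guided): always keep steps containing anchor
    cues, preserving the core reasoning structure and shrinking the
    candidate set for stage 2.
    Stage 2 (fine, surprisal-based): among the remaining steps, keep the
    ones with the highest first-token surprisal, -log p(first token).
    """
    assert len(steps) == len(first_logps)
    anchored = {i for i, s in enumerate(steps)
                if any(w in s.lower() for w in anchor_words)}
    rest = [i for i in range(len(steps)) if i not in anchored]
    k = max(0, round(keep_ratio * len(rest)))
    # Sort the non-anchored steps by surprisal (= -log p), highest first.
    by_surprisal = sorted(rest, key=lambda i: -first_logps[i], reverse=True)[:k]
    keep = sorted(anchored | set(by_surprisal))  # preserve original order
    return [steps[i] for i in keep]

steps = ["Restate the task", "Plan: therefore use DP", "Tangent about naming",
         "Handle the empty-input edge case"]
logps = [math.log(0.8), math.log(0.6), math.log(0.5), math.log(0.05)]
# Keeps the anchored plan step and the high-surprisal edge-case step;
# the confident restatement and the tangent are pruned.
print(compress_cot(steps, logps, keep_ratio=1/3))
```

The compressed traces produced this way would then serve as training targets, so that at inference time the model emits the concise CoT directly rather than requiring a pruning pass.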