🤖 AI Summary
Chain-of-thought (CoT) reasoning in large language models (LLMs) is often verbose and inefficient, leading to excessive context consumption, increased latency, and higher energy costs. To address this, we propose Activation-Steered Compression (ASC), a training-free inference-time method that extracts task-specific steering vectors from the residual stream's activation space and applies KL-divergence-constrained guidance to elicit more concise, math-centric intermediate reasoning steps. Theoretically, ASC preserves output-distribution stability, ensuring accuracy retention. Evaluated across multiple LLMs, ASC achieves up to 67.43% CoT length compression using only 100 calibration samples, yielding an average 2.73x end-to-end inference speedup. To our knowledge, this is the first work to leverage activation-space steering for training-free CoT compression, simultaneously optimizing efficiency, accuracy, and cross-model generalizability.
📝 Abstract
Large language models (LLMs) excel at complex reasoning when they include intermediate steps, known as "chains of thought" (CoTs). However, these rationales are often overly verbose, even for simple problems, leading to wasted context, increased latency, and higher energy consumption. We observe that verbose, English-heavy CoTs and concise, math-centric CoTs occupy distinct regions in the model's residual-stream activation space. By extracting and injecting a "steering vector" to transition between these modes, we can reliably shift generation toward more concise reasoning, effectively compressing CoTs without retraining. We formalize this approach as Activation-Steered Compression (ASC), an inference-time technique that shortens reasoning traces by directly modifying hidden representations. In addition, we provide a theoretical analysis of the impact of ASC on the output distribution, derived from a closed-form KL-divergence-bounded constraint to regulate steering strength. Using only 100 paired verbose and concise examples, ASC achieves up to 67.43% reduction in CoT length on MATH500 and GSM8K datasets, while maintaining accuracy across 7B, 8B, and 32B parameter models. As a training-free method, ASC introduces negligible runtime overhead and, on MATH500, delivers an average 2.73x speedup in end-to-end reasoning wall-clock time on an 8B model. This makes ASC a practical and efficient tool for streamlining the deployment of reasoning-capable LLMs in latency- or cost-sensitive settings. The code is available at: https://github.com/ArminAzizi98/ASC
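The abstract's core mechanism — a steering vector taken as the difference between concise-mode and verbose-mode residual-stream activations, injected at a strength kept under a KL-divergence budget — can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the Gaussian "calibration activations", the random unembedding `W`, and the bisection helper `max_alpha_under_kl` are all assumptions introduced here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical calibration data: residual-stream activations collected from
# verbose (English-heavy) vs. concise (math-centric) CoT examples.
d, n = 16, 100
verbose_acts = rng.normal(0.0, 1.0, size=(n, d))
concise_acts = rng.normal(0.5, 1.0, size=(n, d))  # shifted activation mode

# Steering vector: difference of the two modes' mean activations.
v = concise_acts.mean(axis=0) - verbose_acts.mean(axis=0)

# Toy unembedding matrix and one hidden state to steer (illustrative only).
W = rng.normal(size=(32, d))  # vocab_size x d
h = rng.normal(size=d)

def kl_at(alpha):
    """Output-distribution shift caused by injecting alpha * v."""
    p = softmax(W @ h)
    q = softmax(W @ (h + alpha * v))
    return kl(p, q)

def max_alpha_under_kl(eps, hi=10.0, iters=60):
    """Largest steering strength whose KL shift stays within the budget eps.
    Bisection; assumes KL grows with alpha near 0, which holds here."""
    lo = 0.0  # kl_at(0) == 0, so lo always satisfies the constraint
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kl_at(mid) <= eps:
            lo = mid
        else:
            hi = mid
    return lo

alpha = max_alpha_under_kl(eps=0.1)
h_steered = h + alpha * v  # steered hidden state, KL-bounded by construction
```

In a real deployment the activations would come from a chosen transformer layer (e.g. via forward hooks) rather than synthetic Gaussians, and the KL budget would be enforced with the paper's closed-form bound rather than numeric bisection; the sketch only shows the shape of the procedure.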