🤖 AI Summary
To address error accumulation and overthinking in large language models (LLMs) caused by excessively lengthy chain-of-thought (CoT) reasoning steps, this paper proposes a plug-and-play, fine-grained reasoning optimization framework. The method introduces, for the first time, **self-generated stepwise preference signals**, enabling reinforcement learning–based stepwise supervision and compression of reasoning paths—without auxiliary models or human annotations. Its core contributions are: (1) dynamic generation of step-level preference signals directly from the model’s own reasoning trace; (2) joint optimization of answer accuracy and reasoning path conciseness; and (3) end-to-end CoT compression and self-correction. Experiments demonstrate substantial reductions in reasoning length and consistent improvements in answer accuracy across diverse domains and multilingual benchmarks, confirming strong robustness and generalization.
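The first contribution, deriving step-level preference labels from the model's own trace without an auxiliary reward model, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: it uses each step's mean token log-probability as a self-confidence proxy and labels steps above the trace average as preferred. The function names, the confidence proxy, and the thresholding rule are all assumptions for illustration.

```python
# Illustrative sketch only: self-traced step labeling via a confidence proxy.
# The real SSPO signal-generation procedure may differ; names are hypothetical.

def step_confidence(token_logps):
    """Mean token log-probability of one reasoning step (a simple
    self-confidence proxy; the paper's exact signal is not reproduced here)."""
    return sum(token_logps) / len(token_logps)

def label_step_preferences(trace_token_logps):
    """Label each step in the model's own trace as preferred (True) or
    dispreferred (False) by comparing its confidence to the trace mean.
    No auxiliary model or human annotation is involved."""
    confs = [step_confidence(step) for step in trace_token_logps]
    mean_conf = sum(confs) / len(confs)
    return [c >= mean_conf for c in confs]

# Toy trace: three reasoning steps, each a list of token log-probs.
trace = [[-0.1, -0.2], [-2.0, -3.0], [-0.3, -0.1]]
labels = label_step_preferences(trace)  # middle step is low-confidence
```

The preferred/dispreferred pairs produced this way could then feed a standard preference-optimization objective, which is what makes the supervision "self-generated."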
📝 Abstract
Test-time scaling has proven effective in further enhancing the performance of pretrained Large Language Models (LLMs). However, mainstream post-training methods (i.e., reinforcement learning (RL) with chain-of-thought (CoT) reasoning) often incur substantial computational overhead due to auxiliary models and overthinking. In this paper, we empirically show that incorrect answers partly stem from verbose reasoning processes that fail to self-correct, allowing errors to accumulate across multiple reasoning steps. To this end, we propose Self-traced Step-wise Preference Optimization (SSPO), a pluggable RL process-supervision framework that enables fine-grained optimization of each reasoning step. Specifically, SSPO requires neither auxiliary models nor step-wise manual annotations. Instead, it leverages step-wise preference signals generated by the model itself to guide the optimization process toward reasoning compression. Experiments demonstrate that the reasoning sequences generated by SSPO are both accurate and succinct, effectively mitigating overthinking without compromising model performance across diverse domains and languages.
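The joint objective described above (answer accuracy plus reasoning conciseness) can be illustrated with a DPO-style per-step preference term combined with a length penalty. This is a hedged sketch under stated assumptions, not the paper's actual loss: the sigmoid preference form, the `beta` temperature, and the `lam` length-penalty weight are all borrowed from standard preference-optimization practice and are illustrative only.

```python
import math

def sigmoid(x):
    """Logistic function, used in DPO-style preference losses."""
    return 1.0 / (1.0 + math.exp(-x))

def stepwise_preference_loss(logp_chosen, logp_rejected, chosen_len,
                             beta=0.1, lam=0.01):
    """Illustrative SSPO-style objective (assumed form, not from the paper):
    - a per-step preference term pushing the chosen step's log-probability
      above the rejected step's, as in DPO;
    - a length penalty on the chosen step to encourage concise reasoning."""
    pref = -math.log(sigmoid(beta * (logp_chosen - logp_rejected)))
    length_penalty = lam * chosen_len
    return pref + length_penalty

# A step the model prefers more strongly yields a lower preference loss;
# a longer chosen step pays a conciseness penalty.
loss_close = stepwise_preference_loss(-1.0, -1.0, chosen_len=20)
loss_clear = stepwise_preference_loss(-1.0, -5.0, chosen_len=20)
```

Averaging such a term over all labeled steps in a trace would give one concrete way to optimize accuracy and conciseness jointly; the actual trade-off weighting used by SSPO is not specified here.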