SAPO: Self-Adaptive Process Optimization Makes Small Reasoners Stronger

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing self-evolution methods for small language models, which suffer from a significant performance gap between reasoners and verifiers due to their neglect of fine-grained reasoning steps and reliance on inefficient Monte Carlo process supervision. To overcome these issues, we propose SAPO (Self-Adaptive Process Optimization), the first approach to incorporate the error-related negativity (ERN) mechanism from cognitive neuroscience into the self-evolution pipeline. SAPO enables dynamic and efficient supervision of reasoning trajectories without Monte Carlo estimation, substantially narrowing the performance gap between reasoning and verification. Our method outperforms current self-evolution approaches on both mathematical and code generation tasks. Furthermore, we introduce the first process-level reward modeling benchmark tailored to these domains, advancing the evaluation of fine-grained reasoning capabilities.

📝 Abstract
Existing self-evolution methods overlook the influence of fine-grained reasoning steps, which leads to the reasoner-verifier gap. The computational inefficiency of Monte Carlo (MC) process supervision further exacerbates the difficulty of mitigating this gap. Motivated by Error-Related Negativity (ERN), a cognitive mechanism by which a reasoner can localize errors immediately after incorrect decisions and make rapid adjustments, we propose a Self-Adaptive Process Optimization (SAPO) method for self-improvement in Small Language Models (SLMs). SAPO adaptively and efficiently introduces process supervision signals by actively minimizing the reasoner-verifier gap rather than relying on inefficient MC estimation. Extensive experiments demonstrate that the proposed method outperforms most existing self-evolution methods on two challenging task types: mathematics and code. Additionally, to further investigate SAPO's impact on verifier performance, this work introduces two new benchmarks for process reward models in mathematical and coding tasks.
Problem

Research questions and friction points this paper is trying to address.

reasoner-verifier gap
process supervision
self-evolution
fine-grained reasoning
computational inefficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Adaptive Process Optimization
Reasoner-Verifier Gap
Process Supervision
Small Language Models
Error-Related Negativity
Authors

Kaiyuan Chen, Bytedance
Guangmin Zheng, School of Information Science and Engineering, Yunnan University, Kunming, China
Jin Wang, Yunnan University
Xiaobing Zhou, School of Information Science and Engineering, Yunnan University, Kunming, China
Xuejie Zhang, School of Information Science and Engineering, Yunnan University, Kunming, China