Stop Unnecessary Reflection: Training LRMs for Efficient Reasoning with Adaptive Reflection and Length Coordinated Penalty

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of large reasoning models that often generate excessively long reasoning chains due to over-reflection, resulting in high computational costs, increased latency, and degraded accuracy. To mitigate this, the authors propose the ARLCP framework, which uses reinforcement learning to dynamically balance an adaptive reflection penalty with a problem-complexity-aware length penalty, guiding the model toward concise yet effective reasoning paths. The approach integrates reflection-behavior analysis, a dynamic penalty mechanism, and problem-complexity estimation. Experiments demonstrate that ARLCP reduces reasoning length by 53.1% while improving accuracy by 5.8% on a 1.5B-parameter model, and achieves a 35.0% length reduction with a 2.7% accuracy gain on a 7B-parameter model, significantly outperforming existing methods.

📝 Abstract
Large Reasoning Models (LRMs) have demonstrated remarkable performance on complex reasoning tasks by employing test-time scaling. However, they often generate over-long chains of thought in which excessive reflection, such as repetitive self-questioning and circular reasoning, leads to high token consumption, substantial computational overhead, and increased latency without improving accuracy, particularly in smaller models. Our observations reveal that increasing problem complexity induces more excessive and unnecessary reflection, which in turn reduces accuracy and increases token overhead. To address this challenge, we propose Adaptive Reflection and Length Coordinated Penalty (ARLCP), a novel reinforcement learning framework designed to dynamically balance reasoning efficiency and solution accuracy. ARLCP introduces two key innovations: (1) a reflection penalty that adaptively curtails unnecessary reflective steps while preserving essential reasoning, and (2) a length penalty calibrated to the estimated complexity of the problem. By coordinating these penalties, ARLCP encourages the model to generate more concise and effective reasoning paths. We evaluate our method on five mathematical reasoning benchmarks using the DeepSeek-R1-Distill-Qwen-1.5B and DeepSeek-R1-Distill-Qwen-7B models. Experimental results show that ARLCP achieves a superior efficiency-accuracy trade-off compared to existing approaches. For the 1.5B model, it reduces the average response length by 53.1% while simultaneously improving accuracy by 5.8%. For the 7B model, it achieves a 35.0% reduction in length with a 2.7% accuracy gain. The code is released at https://github.com/ZeweiYu1/ARLCP.
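The abstract describes a reward that coordinates a reflection penalty with a complexity-calibrated length penalty. The sketch below illustrates one way such a shaped reward could look; the marker list, the linear length budget, and the weights `alpha`/`beta` are all assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of an ARLCP-style coordinated reward.
# All marker strings, budget shapes, and weights are illustrative assumptions.

REFLECTION_MARKERS = ("wait", "let me re-check", "alternatively", "on second thought")

def count_reflections(reasoning: str) -> int:
    """Count surface-level reflection markers in a reasoning trace (assumed proxy)."""
    text = reasoning.lower()
    return sum(text.count(marker) for marker in REFLECTION_MARKERS)

def length_budget(complexity: float, base: int = 256, scale: int = 2048) -> float:
    """Token budget that grows with an estimated problem complexity in [0, 1]."""
    return base + scale * complexity

def arlcp_style_reward(correct: bool, reasoning_tokens: int, reasoning_text: str,
                       complexity: float, alpha: float = 0.05, beta: float = 0.5) -> float:
    """Accuracy reward minus a reflection penalty and a complexity-aware length penalty."""
    accuracy = 1.0 if correct else 0.0
    # Penalize each detected reflective step (adaptive curtailment is approximated
    # here by a simple per-marker cost).
    reflection_penalty = alpha * count_reflections(reasoning_text)
    # Only tokens beyond the complexity-calibrated budget are penalized, so
    # harder problems are allowed longer chains of thought.
    budget = length_budget(complexity)
    overshoot = max(0.0, reasoning_tokens - budget)
    length_penalty = beta * overshoot / budget
    return accuracy - reflection_penalty - length_penalty
```

Under this sketch, a concise correct answer within budget keeps the full accuracy reward, while a verbose, reflection-heavy answer to an easy (low-complexity) problem is penalized on both terms.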
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
unnecessary reflection
reasoning efficiency
token overhead
accuracy trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Reflection
Length Penalty
Reinforcement Learning
Reasoning Efficiency
Large Reasoning Models