On the Optimal Reasoning Length for RL-Trained Language Models

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the excessive computational overhead that overly long chain-of-thought outputs impose on reinforcement learning (RL)-trained language models, motivating strategies that control response length without compromising reasoning ability. Length control is analyzed systematically within an RL training framework: multiple strategies, including length penalties, are evaluated on Qwen3-1.7B Base and DeepSeek-R1-Distill-Qwen-1.5B. The work identifies two failure modes: overly long outputs yield dispersed, inconsistent results, while overly short outputs cause under-thinking. Notably, length penalties can inadvertently hinder the acquisition of reasoning ability; however, for models with strong prior reasoning capability, well-tuned length control substantially improves reasoning efficiency.

📝 Abstract
Reinforcement learning substantially improves reasoning in large language models, but it also tends to lengthen chain-of-thought outputs and increase computational cost during both training and inference. Although length control methods have been proposed, it remains unclear what output length best balances efficiency and performance. In this work, we compare several length control methods on two models, Qwen3-1.7B Base and DeepSeek-R1-Distill-Qwen-1.5B. Our results indicate that length penalties may hinder reasoning acquisition, while properly tuned length control can improve efficiency for models with strong prior reasoning. By extending prior work to RL-trained policies, we identify two failure modes: (1) long outputs increase dispersion, and (2) short outputs lead to under-thinking.
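The length penalties discussed above are typically applied as a reward-shaping term during RL training. A minimal sketch of one common variant, a linear penalty on tokens beyond a length budget: the function name, `target_len`, and `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
def length_penalized_reward(correct: bool, num_tokens: int,
                            target_len: int = 512,
                            alpha: float = 0.001) -> float:
    """Accuracy reward minus a linear penalty for tokens beyond target_len.

    Hypothetical sketch of a length-control reward; the paper compares
    several such strategies but does not prescribe this specific form.
    """
    base = 1.0 if correct else 0.0
    excess = max(0, num_tokens - target_len)  # no penalty under budget
    return base - alpha * excess

# A correct answer under budget keeps the full reward; a verbose
# correct answer is penalized, nudging the policy toward shorter traces.
print(length_penalized_reward(True, 400))    # 1.0
print(length_penalized_reward(True, 1000))   # 0.512
```

The trade-off the paper highlights is visible here: a large `alpha` can make short-but-wrong answers competitive with long-but-correct ones, which is one way a penalty can hinder reasoning acquisition.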
Problem

Research questions and friction points this paper is trying to address.

reasoning length
reinforcement learning
large language models
efficiency-performance trade-off
chain-of-thought
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
reasoning length optimization
chain-of-thought
length control
efficiency-performance trade-off