🤖 AI Summary
This study addresses the excessive computational overhead in reinforcement learning (RL)-trained language models caused by overly long chain-of-thought outputs, which calls for strategies that optimize response length without compromising reasoning capability. For the first time, length control is systematically analyzed within an RL training framework, evaluating multiple strategies—including length penalties—on Qwen3-1.7B and DeepSeek-R1-Distill-Qwen-1.5B. The work identifies two failure modes: overly long outputs lead to dispersed, inconsistent results, while overly short outputs cause under-thinking. Notably, length penalties may inadvertently hinder the acquisition of reasoning abilities; for models with strong prior reasoning capabilities, however, judicious length control significantly enhances reasoning efficiency.
📝 Abstract
Reinforcement learning substantially improves reasoning in large language models, but it also tends to lengthen chain-of-thought outputs and increase computational cost during both training and inference. Although length control methods have been proposed, it remains unclear what output length best balances efficiency and performance. In this work, we compare several length control methods on two models, Qwen3-1.7B Base and DeepSeek-R1-Distill-Qwen-1.5B. Our results indicate that length penalties may hinder reasoning acquisition, while properly tuned length control can improve efficiency for models with strong prior reasoning. By extending prior work to RL-trained policies, we identify two failure modes: (1) long outputs increase dispersion, and (2) short outputs lead to under-thinking.
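The paper does not specify its exact penalty formulation here, but a common way length penalties enter RL training is as a term subtracted from the task reward. Below is a minimal, hypothetical sketch of a linear over-length penalty; the function name, `target_len`, and `alpha` are illustrative assumptions, not values from the study:

```python
def length_penalized_reward(correct: bool, num_tokens: int,
                            target_len: int = 1024,
                            alpha: float = 0.001) -> float:
    """Illustrative reward shaping: full credit for a correct answer,
    minus a linear penalty for tokens beyond a target budget.
    (Hypothetical sketch; not the paper's exact formulation.)"""
    base = 1.0 if correct else 0.0
    # Penalize only the tokens exceeding the budget; short outputs are untouched.
    overflow = max(0, num_tokens - target_len)
    return base - alpha * overflow
```

Under this kind of shaping, the trade-off the paper studies is visible directly: a large `alpha` pushes the policy toward short outputs (risking under-thinking), while a negligible `alpha` lets chain-of-thought length grow unchecked.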