🤖 AI Summary
Existing methods employ static, fixed length penalties, which fail to adapt to the continually improving reasoning capabilities of large language models (LLMs), compromising the trade-off between accuracy and conciseness. This paper proposes a reinforcement learning-based adaptive length control framework that, for the first time, formulates length control as a constrained optimization problem. Leveraging Lagrangian duality, it dynamically adjusts the penalty coefficient in a closed loop, intensifying the penalty when outputs exceed the target length and relaxing it when they fall below, while integrating reward shaping into constrained reasoning generation. Evaluated across mathematical reasoning, code generation, and instruction following, the method reduces average inference length by 60% without sacrificing task performance. Key contributions are: (1) the first dynamic Lagrangian optimization formulation of length constraints for LLMs; and (2) the first adaptive length control paradigm tailored specifically to LLM reasoning.
📝 Abstract
Existing approaches typically rely on fixed length penalties, but such penalties are hard to tune and fail to adapt to the evolving reasoning abilities of LLMs, leading to suboptimal trade-offs between accuracy and conciseness. To address this challenge, we propose Leash (adaptive LEngth penAlty and reward SHaping), a reinforcement learning framework for efficient reasoning in LLMs. We formulate length control as a constrained optimization problem and employ a Lagrangian primal-dual method to dynamically adjust the penalty coefficient: when generations exceed the target length, the penalty is intensified; when they are shorter, it is relaxed. This adaptive mechanism guides models toward producing concise reasoning without sacrificing task performance. Experiments on Deepseek-R1-Distill-Qwen-1.5B and Qwen3-4B-Thinking-2507 show that Leash reduces average reasoning length by 60% across diverse tasks, including in-distribution mathematical reasoning and out-of-distribution domains such as coding and instruction following, while maintaining competitive performance. Our work thus presents a practical and effective paradigm for developing controllable and efficient LLMs that balance reasoning capability against computational budget.
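The primal-dual mechanism described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function and variable names (`shaped_reward`, `dual_update`, `lam`, `dual_lr`) are hypothetical, and the penalty coefficient is treated as a Lagrange multiplier on the constraint E[length] ≤ target, updated by projected dual ascent.

```python
def shaped_reward(task_reward: float, length: int, lam: float) -> float:
    """Reward shaping: subtract the length penalty, scaled by the
    current multiplier lam, from the task reward."""
    return task_reward - lam * length


def dual_update(lam: float, avg_length: float, target_length: float,
                dual_lr: float = 1e-3) -> float:
    """Projected gradient ascent on the dual variable.

    If generations are longer than the target, the constraint is
    violated and lam grows (penalty intensifies); if shorter, lam
    shrinks toward zero (penalty relaxes).
    """
    lam = lam + dual_lr * (avg_length - target_length)
    return max(lam, 0.0)  # multiplier must stay non-negative


# Toy closed loop: as average length drops below the target,
# the penalty coefficient relaxes back toward zero.
lam = 0.0
target = 512
for avg_len in (800, 600, 400):  # pretend per-iteration average lengths
    lam = dual_update(lam, avg_len, target)
```

The projection `max(lam, 0.0)` is what lets the penalty switch off entirely once generations are consistently within budget, which a fixed penalty cannot do.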