Just Enough Thinking: Efficient Reasoning with Adaptive Length Penalties Reinforcement Learning

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large reasoning models (LRMs) frequently generate redundant tokens on simple problems, wasting computation; existing length-control methods, such as supervised fine-tuning on shorter traces or reinforcement learning with fixed penalties, fail to adapt to varying problem difficulty. This work proposes Adaptive Length Penalty (ALP), a mechanism that dynamically adjusts the per-token generation cost based on an online solve rate estimated for each prompt during training, enabling difficulty-aware length control. ALP combines a differentiable reinforcement-learning penalty with rolling online evaluation, applied as post-training fine-tuning of DeepScaleR-1.5B. Experiments show that ALP reduces average token consumption by 50% with no statistically significant loss in overall accuracy, and even improves accuracy on the most challenging problems. The authors present this as the first approach to achieve fine-grained, difficulty-adaptive generation-length control for LRMs that requires no data curation.

📝 Abstract
Large reasoning models (LRMs) achieve higher performance on challenging reasoning tasks by generating more tokens at inference time, but this verbosity often wastes computation on easy problems. Existing solutions, including supervised fine-tuning on shorter traces, user-controlled budgets, or RL with uniform penalties, either require data curation, manual configuration, or treat all problems alike regardless of difficulty. We introduce Adaptive Length Penalty (ALP), a reinforcement learning objective tailoring generation length to per-prompt solve rate. During training, ALP monitors each prompt's online solve rate through multiple rollouts and adds a differentiable penalty whose magnitude scales inversely with the prompt's difficulty, so confident (easy) prompts incur a high cost for extra tokens while hard prompts remain unhindered. Post-training DeepScaleR-1.5B with ALP cuts average token usage by 50% without significantly dropping performance. Relative to fixed-budget and uniform-penalty baselines, ALP redistributes its reduced budget more intelligently, cutting compute on easy prompts and reallocating the saved tokens to difficult ones, delivering higher accuracy on the hardest, most costly problems.
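The reward shaping described above can be sketched in a few lines. This is a hypothetical reconstruction, not the paper's exact formula: the names `beta` and `max_tokens` are illustrative assumptions, and the per-token cost is made proportional to the estimated solve rate so that easy prompts (solve rate near 1) pay heavily for extra tokens while hard prompts (solve rate near 0) are barely penalized, matching the behavior the abstract describes.

```python
def alp_reward(correct, num_tokens, solve_rate, max_tokens=8192, beta=0.5):
    """Sketch of an ALP-style shaped reward for one rollout.

    correct:    whether this rollout solved the prompt (0/1)
    num_tokens: generation length of this rollout
    solve_rate: online solve rate for this prompt, estimated from
                recent rollouts during training
    beta, max_tokens: assumed penalty scale and length normalizer
                (illustrative values, not taken from the paper)
    """
    length_frac = num_tokens / max_tokens
    # Per-token cost grows with the solve rate: easy prompts are
    # penalized for verbosity, hard prompts are left unhindered.
    penalty = beta * solve_rate * length_frac
    return float(correct) - penalty
```

Because the penalty is a smooth function of length and solve rate, it slots directly into a policy-gradient objective alongside the task reward.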
Problem

Research questions and friction points this paper is trying to address.

Reduces redundant token generation in large reasoning models
Adapts generation length to per-prompt difficulty
Redistributes compute budget from easy to hard prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Length Penalty for efficient reasoning
Differentiable penalty scales with solve rate
Redistributes tokens from easy to hard prompts
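The solve-rate signal driving these innovations is just the empirical success fraction over a prompt's rollouts. A minimal sketch, assuming a group of 8 rollouts per prompt (the actual rollout count used in the paper may differ):

```python
def solve_rate(rollout_rewards):
    """Fraction of a prompt's rollouts judged correct (rewards in {0, 1})."""
    return sum(rollout_rewards) / len(rollout_rewards)

# An easy prompt: most rollouts succeed, so the length penalty bites hard.
easy = solve_rate([1, 1, 1, 1, 1, 1, 1, 0])  # 0.875
# A hard prompt: few rollouts succeed, so length stays nearly unpenalized.
hard = solve_rate([0, 0, 1, 0, 0, 0, 0, 0])  # 0.125
```

Updating this estimate online as training progresses lets the penalty track each prompt's current difficulty rather than a fixed label.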