🤖 AI Summary
To address the high computational cost and poor deployment efficiency of large language models (LLMs) on advanced reasoning tasks, this paper proposes a reinforcement learning–based dynamic compute allocation method using Proximal Policy Optimization (PPO), training models to generate concise, accurate chain-of-thought (CoT) reasoning paths adapted to task complexity. The key contribution is a tunable efficiency–accuracy trade-off controlled by a single hyperparameter, which yields a family of models at multiple inference-efficiency tiers and flexibly meets low-latency or low-energy constraints. The approach integrates task-aware reward modeling with explicit computational cost penalties and is compatible with mainstream open-weight reasoning LLM architectures. Experiments on two open-weight large reasoning models demonstrate an average 42% reduction in inference tokens, a 38% decrease in latency, and accuracy maintained at over 96% of the original baseline.
📝 Abstract
Scaling model size and training data has led to great advances in the performance of Large Language Models (LLMs). However, the diminishing returns of this approach necessitate alternative methods to improve model capabilities, particularly in tasks requiring advanced reasoning. Large reasoning models, which leverage long chains of thought, bring unprecedented breakthroughs in problem-solving capabilities, but at a substantial deployment cost associated with longer generations. Reducing inference costs is crucial for the economic viability, user experience, and environmental sustainability of these models. In this work, we propose to train large reasoning models to reason efficiently. More precisely, we use reinforcement learning (RL) to train reasoning models to dynamically allocate inference-time compute based on task complexity. Our method incentivizes models to minimize unnecessary computational overhead while maintaining accuracy, thereby achieving substantial efficiency gains. It enables the derivation of a family of reasoning models with varying efficiency levels, all controlled via a single hyperparameter. Experiments on two open-weight large reasoning models demonstrate significant reductions in inference cost while preserving most of the accuracy.
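The core mechanism described above, a correctness reward offset by an explicit cost penalty whose strength is set by one hyperparameter, can be sketched as follows. This is an illustrative toy, not the paper's actual reward: the function name `efficiency_reward` and the linear penalty form are assumptions.

```python
def efficiency_reward(is_correct: bool, num_tokens: int,
                      max_tokens: int, alpha: float) -> float:
    """Toy length-penalized reward for RL training of a reasoning model.

    alpha is the single efficiency hyperparameter: alpha = 0 recovers the
    plain correctness reward, and larger alpha trades accuracy for shorter
    chains of thought. The penalty form here (linear in normalized token
    count) is an illustrative assumption.
    """
    correctness = 1.0 if is_correct else 0.0
    length_penalty = alpha * (num_tokens / max_tokens)
    return correctness - length_penalty


# Sweeping alpha would yield a family of models at different efficiency
# tiers: each value defines a different accuracy/compute trade-off.
for alpha in (0.0, 0.1, 0.3):
    print(f"alpha={alpha}: reward={efficiency_reward(True, 512, 2048, alpha)}")
```

With `alpha = 0.0` the reward ignores generation length entirely; raising it makes every extra token of chain-of-thought cost reward, so the policy is incentivized to spend compute only where the task demands it.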