🤖 AI Summary
Large reasoning models typically apply fixed-depth chain-of-thought (CoT) inference, leading to over-computation on simple tasks and insufficient reasoning on complex ones. To address this, we propose Hierarchical Budget Policy Optimization (HBPO), a reinforcement learning framework that dynamically aligns inference depth with task complexity. HBPO introduces a hierarchical budget sampling mechanism, a multi-subgroup rollout strategy, and a complexity-aware reward function. Notably, it mitigates exploration space collapse in efficiency-oriented training, eliciting emergent adaptive behavior in the model. Evaluated on four reasoning benchmarks, HBPO reduces average token consumption by up to 60.6% while improving accuracy by 3.14%, demonstrating that computational efficiency and reasoning capability can be jointly optimized.
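The hierarchical budget sampling described above can be pictured as partitioning each prompt's rollouts across subgroups, each generated under a different token budget. The following Python sketch is only illustrative: the budget values, subgroup count, and the names `BUDGETS` and `sample_rollout_subgroups` are assumptions, not the paper's actual hyperparameters or API.

```python
# Hypothetical sketch of hierarchical budget sampling: each prompt's
# rollouts are split evenly across subgroups with distinct token budgets,
# so long reasoning paths remain represented during training.
BUDGETS = [512, 1024, 2048, 4096]  # illustrative per-subgroup token budgets

def sample_rollout_subgroups(prompts, rollouts_per_prompt=16):
    """Build a rollout plan assigning each sample a subgroup budget."""
    per_budget = rollouts_per_prompt // len(BUDGETS)
    plan = []
    for prompt in prompts:
        for budget in BUDGETS:
            for _ in range(per_budget):
                # Each rollout is decoded under its subgroup's max length,
                # preserving exploration diversity across reasoning depths.
                plan.append({"prompt": prompt, "max_tokens": budget})
    return plan
```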
📝 Abstract
Large reasoning models achieve remarkable performance through extensive chain-of-thought generation, yet exhibit significant computational inefficiency by applying uniform reasoning strategies regardless of problem complexity. We present Hierarchical Budget Policy Optimization (HBPO), a reinforcement learning framework that enables models to learn problem-specific reasoning depths without sacrificing capability. HBPO addresses the fundamental challenge of exploration space collapse in efficiency-oriented training, where penalties on long outputs systematically bias models away from necessary long reasoning paths. Through hierarchical budget exploration, our approach partitions rollout samples into multiple subgroups with distinct token budgets, enabling efficient resource allocation while preventing capability degradation. We introduce differentiated reward mechanisms that create budget-aware incentives aligned with problem complexity, allowing models to discover natural correspondences between task requirements and computational effort. Extensive experiments demonstrate that HBPO reduces average token usage by up to 60.6% while improving accuracy by 3.14% across four reasoning benchmarks. Unlike existing methods that impose external constraints or rely on discrete mode selection, HBPO exhibits emergent adaptive behavior: models automatically adjust reasoning depth based on problem complexity. Our results suggest that reasoning efficiency and capability are not inherently conflicting and can be simultaneously optimized through appropriately structured hierarchical training that preserves exploration diversity.
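To make the differentiated reward mechanism concrete, here is a minimal sketch of a budget-aware reward. The functional form, the `alpha` scale, and the function name `budget_aware_reward` are illustrative assumptions, not the paper's exact formulation; the intent is only to show how a reward can penalize overshooting a subgroup's budget instead of applying one uniform length penalty.

```python
def budget_aware_reward(correct: bool, length: int, budget: int,
                        alpha: float = 0.5) -> float:
    """Hypothetical budget-aware reward: correct answers earn more when
    they stay within their subgroup's token budget."""
    if not correct:
        return 0.0
    # Reward decays only as the response exceeds its own subgroup budget,
    # so long rollouts in large-budget subgroups are not biased against.
    overshoot = max(0, length - budget) / budget
    return 1.0 / (1.0 + alpha * overshoot)
```

Under this shape, a correct 600-token response scores 1.0 in a 1024-token subgroup but roughly 0.96 in a 512-token one, creating the graded, subgroup-specific incentives the abstract describes.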