Hierarchical Budget Policy Optimization for Adaptive Reasoning

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models typically employ fixed-depth chain-of-thought (CoT) inference, leading to over-computation on simple tasks and insufficient reasoning on complex ones. To address this, we propose the Hierarchical Budget Policy Optimization (HBPO) framework, which dynamically aligns inference depth with task complexity via reinforcement learning. HBPO introduces a hierarchical budget sampling mechanism, multi-subgroup rollout strategies, and a complexity-aware reward function. Notably, it is the first approach to mitigate exploration space collapse in efficiency-oriented training, thereby eliciting emergent adaptive behavior in the model. Evaluated on four reasoning benchmarks, HBPO achieves an average 60.6% reduction in token consumption while improving accuracy by 3.14%, demonstrating that computational efficiency and reasoning capability can be jointly optimized.

📝 Abstract
Large reasoning models achieve remarkable performance through extensive chain-of-thought generation, yet exhibit significant computational inefficiency by applying uniform reasoning strategies regardless of problem complexity. We present Hierarchical Budget Policy Optimization (HBPO), a reinforcement learning framework that enables models to learn problem-specific reasoning depths without sacrificing capability. HBPO addresses the fundamental challenge of exploration space collapse in efficiency-oriented training, where penalties on long output length systematically bias models away from necessary long reasoning paths. Through hierarchical budget exploration, our approach partitions rollout samples into multiple subgroups with distinct token budgets, aiming to enable efficient resource allocation while preventing degradation of capability. We introduce differentiated reward mechanisms that create budget-aware incentives aligned with the complexity of the problem, allowing models to discover natural correspondences between task requirements and computational effort. Extensive experiments demonstrate that HBPO reduces average token usage by up to 60.6% while improving accuracy by 3.14% across four reasoning benchmarks. Unlike existing methods that impose external constraints or rely on discrete mode selection, HBPO exhibits emergent adaptive behavior where models automatically adjust reasoning depth based on problem complexity. Our results suggest that reasoning efficiency and capability are not inherently conflicting, and can be simultaneously optimized through appropriately structured hierarchical training that preserves exploration diversity.
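The hierarchical budget exploration described in the abstract, where rollout samples for a single prompt are partitioned into subgroups with distinct token budgets, can be sketched as follows. This is a minimal illustration only: `sample_fn`, the budget values, and the subgroup size are assumptions for the example, not the authors' implementation.

```python
def hierarchical_rollout(prompt, sample_fn,
                         budgets=(512, 1024, 2048, 4096), group_size=4):
    """Partition rollouts for one prompt into subgroups, each capped at a
    distinct token budget, so both short and long reasoning paths stay in
    the exploration space during training.

    sample_fn(prompt, max_tokens) -> (answer, tokens_used) is a hypothetical
    model-sampling callable; budgets and group_size are illustrative values.
    """
    return {
        budget: [sample_fn(prompt, max_tokens=budget) for _ in range(group_size)]
        for budget in budgets
    }
```

Because every budget tier is always sampled, length penalties applied within a tier cannot collapse the policy onto uniformly short outputs, which is the exploration-space-collapse failure the paper targets.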
Problem

Research questions and friction points this paper is trying to address.

Optimizes reasoning depth adaptively for computational efficiency
Prevents exploration collapse in efficiency-oriented model training
Balances token usage and accuracy in reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes problem-specific reasoning depth
Hierarchical budget exploration prevents capability degradation
Differentiated rewards align effort with problem complexity
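The "differentiated rewards" idea above can be made concrete with a hedged sketch of a budget-aware reward: a correct answer earns full reward minus a length penalty measured against its own subgroup's budget, so economical solutions are favored without punishing genuinely long reasoning in high-budget subgroups. The functional form and the `alpha` weight are assumptions for illustration; the paper's exact reward is not reproduced on this page.

```python
def budget_aware_reward(correct, tokens_used, budget, alpha=0.5):
    """Hypothetical budget-aware reward (not the paper's exact formula).

    Wrong answers score zero; correct answers lose up to `alpha` of the
    reward in proportion to the fraction of the subgroup budget spent.
    """
    if not correct:
        return 0.0
    usage = min(tokens_used / budget, 1.0)  # clamp so overruns cap the penalty
    return 1.0 - alpha * usage
```

Within each subgroup, the same correct answer is worth less the more of the budget it consumes, which is the incentive that lets the policy discover shorter reasoning paths for simple tasks while still rewarding full-length reasoning where the larger budget is actually needed.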
Authors

Shangke Lyu, Westlake University: Robot control, Learning control, Human-robot Interaction
Linjuan Wu, Zhejiang University
Yuchen Yan, Zhejiang University
Xingyu Wu, Hong Kong Polytechnic University: Automated machine learning, Causality-based machine learning, Large foundation model, AutoML
Hao Li, SF Technology
Yongliang Shen, Zhejiang University
Peisheng Jiang, SF Technology
Weiming Lu, Zhejiang University: Natural Language Processing, Large Language Models, AGI
Jun Xiao, Zhejiang University
Yueting Zhuang, Zhejiang University