Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reinforcement learning (RL) approaches treat large language models (LLMs) as monolithic black-box policies, overlooking intrinsic policy differentiation across layers and modules and thereby hindering targeted optimization and mechanistic understanding of reasoning. Method: We first decouple layer-wise and module-wise policies within the Transformer residual stream, revealing distinct entropy evolution patterns during training across models (e.g., Qwen3, LLaMA). Building on this, we propose Bottom-up Policy Optimization (BuPO), a novel paradigm that directly optimizes low-level policies early in training to reconstruct foundational reasoning capabilities. BuPO integrates residual stream decomposition, unembedding matrix projection, policy entropy analysis, and internal policy alignment optimization. Contribution/Results: On complex reasoning benchmarks, BuPO significantly outperforms conventional RL methods. Notably, Qwen3 exhibits a human-like progressive hierarchical reasoning structure, while LLaMA demonstrates rapid convergence in final-layer policies, evidencing model-specific policy dynamics and enabling more interpretable, effective RL-based LLM refinement.

📝 Abstract
Existing reinforcement learning (RL) approaches treat large language models (LLMs) as a single unified policy, overlooking their internal mechanisms. Understanding how the policy evolves across layers and modules is therefore crucial for enabling more targeted optimization and unraveling complex reasoning mechanisms. In this paper, we decompose the language model policy by leveraging the intrinsic split of the Transformer residual stream and the equivalence between hidden states composed with the unembedding matrix and the resulting samplable policy. This decomposition reveals Internal Layer Policies, corresponding to contributions from individual layers, and Internal Modular Policies, which align with the self-attention and feed-forward network (FFN) components within each layer. By analyzing the entropy of internal policies, we find that: (a) Early layers maintain high entropy for exploration, while top layers converge to near-zero entropy for refinement, with convergence patterns varying across model series. (b) LLaMA's prediction space rapidly converges in the final layer, whereas Qwen-series models, especially Qwen3, exhibit a more human-like, progressively structured reasoning pattern. Motivated by these findings, we propose Bottom-up Policy Optimization (BuPO), a novel RL paradigm that directly optimizes the internal layer policy during early training. By aligning the training objective at lower layers, BuPO reconstructs foundational reasoning capabilities and achieves superior performance. Extensive experiments on complex reasoning benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/Trae1ounG/BuPO.
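The decomposition described in the abstract resembles a logit-lens-style projection: the residual-stream contribution of each layer is accumulated, projected through the unembedding matrix, and normalized into a samplable per-layer distribution whose entropy can then be tracked. The sketch below illustrates that idea only; the function names, shapes, and the omission of the final LayerNorm are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def internal_layer_policies(layer_contribs, W_U, temperature=1.0):
    """Project cumulative residual-stream states through the unembedding
    matrix to obtain a next-token distribution ("internal policy") per layer.

    layer_contribs: (L, d) array, each row the residual-stream update
                    written by one layer at a single token position.
    W_U:            (d, V) unembedding matrix.
    Returns an (L, V) array of per-layer next-token distributions.
    """
    hidden = np.cumsum(layer_contribs, axis=0)    # partial residual states, (L, d)
    logits = hidden @ W_U / temperature           # (L, V)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=-1, keepdims=True)

def policy_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of each internal policy."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

# Toy example: 4 layers, hidden size 8, vocabulary of 10 tokens.
rng = np.random.default_rng(0)
contribs = rng.normal(size=(4, 8))
W_U = rng.normal(size=(8, 10))
probs = internal_layer_policies(contribs, W_U)
entropies = policy_entropy(probs)
```

Tracking `entropies` across layers and training steps is what would surface patterns like the high-entropy early layers and near-zero-entropy top layers described above.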
Problem

Research questions and friction points this paper is trying to address.

Decomposes LLM policy into internal layer and modular components
Analyzes entropy patterns across layers to understand reasoning evolution
Proposes bottom-up optimization to reconstruct foundational reasoning capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes LLM policy via Transformer residual stream
Optimizes internal layer policies in early training
Aligns training objectives at lower layers for reasoning
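The paper defines BuPO's actual training objective; purely as a rough illustration of what "optimizing an internal layer policy" could mean, here is a hypothetical REINFORCE-style surrogate loss applied to a lower layer's next-token distribution rather than the final output policy. The function name, shapes, and the advantage estimator are assumptions, not the authors' method.

```python
import numpy as np

def lower_layer_pg_loss(layer_probs, actions, advantages, eps=1e-12):
    """REINFORCE-style surrogate loss on one internal layer policy.

    layer_probs: (T, V) next-token distributions obtained by projecting a
                 chosen lower layer's residual state through the unembedding
                 matrix, for T generated tokens.
    actions:     (T,) sampled token ids.
    advantages:  (T,) scalar advantages (e.g. from a GRPO/PPO-style estimator).
    """
    # Log-probability the internal policy assigned to each sampled token.
    logp = np.log(layer_probs[np.arange(len(actions)), actions] + eps)
    # Maximizing advantage-weighted log-likelihood = minimizing its negation.
    return -(advantages * logp).mean()
```

Early in training such a loss would be applied at a low layer, then (per the bottom-up schedule described above) the optimization target would move toward the full output policy.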
Yuqiao Tan
Institute of Automation, Chinese Academy of Sciences
LLMs Reasoning · LLMs Interpretability

Minzheng Wang
Institute of Automation, Chinese Academy of Sciences
Large Language Models · Natural Language Processing

Shizhu He
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Huanxuan Liao
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Large Language Model · Long Context Modeling

Chengfeng Zhao
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Qiunan Lu
University of Electronic Science and Technology of China

Tian Liang
Tencent AI Lab
NLP

Jun Zhao
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Kang Liu
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences