🤖 AI Summary
Existing reinforcement learning (RL) approaches treat large language models (LLMs) as monolithic black-box policies, overlooking intrinsic policy differentiation across layers and modules, which hinders targeted optimization and a mechanistic understanding of reasoning. Method: We first decouple layer-wise and module-wise policies within the Transformer residual stream, revealing distinct entropy evolution patterns during training across models (e.g., Qwen3, LLaMA). Building on this, we propose Bottom-up Policy Optimization (BuPO), a novel paradigm that directly optimizes low-level policies early in training to reconstruct foundational reasoning capabilities. BuPO integrates residual stream decomposition, unembedding-matrix projection, policy entropy analysis, and internal policy alignment optimization. Contribution/Results: On complex reasoning benchmarks, BuPO significantly outperforms conventional RL methods. Notably, Qwen3 exhibits a human-like, progressively structured hierarchical reasoning pattern, while LLaMA's final-layer policy converges rapidly; these model-specific policy dynamics enable more interpretable and effective RL-based LLM refinement.
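To make the decomposition concrete, here is a minimal logit-lens-style sketch (not the authors' code) of the projection step: per-layer residual-stream states are passed through the model's unembedding matrix to obtain samplable internal layer policies, whose entropies can then be tracked. The model name, the use of the final norm on intermediate states, and the prompt are illustrative assumptions.

```python
# Minimal sketch (illustrative, not BuPO's implementation): recover internal
# layer policies by projecting intermediate residual-stream states through
# the unembedding matrix, then measure each layer's policy entropy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # assumption: any HF causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

inputs = tok("2 + 2 * 3 =", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

W_U = model.get_output_embeddings().weight  # unembedding matrix, shape (V, d)
norm = model.model.norm                     # final RMSNorm (architecture-dependent path)

for layer, h in enumerate(out.hidden_states):      # embeddings + one state per layer
    last = norm(h[:, -1, :])                       # last-position hidden state
    probs = torch.softmax(last.float() @ W_U.float().T, dim=-1)  # internal layer policy
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    print(f"layer {layer:2d}: entropy = {entropy.item():.3f}")
```

Under the findings summarized above, one would expect this entropy curve to stay high in early layers and collapse toward zero near the top, with the collapse point differing between LLaMA- and Qwen-series models.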
📝 Abstract
Existing reinforcement learning (RL) approaches treat large language models (LLMs) as a single unified policy, overlooking their internal mechanisms. Understanding how the policy evolves across layers and modules is therefore crucial for enabling more targeted optimization and for unraveling complex reasoning mechanisms. In this paper, we decompose the language model policy by leveraging the intrinsic split of the Transformer residual stream and the equivalence between the composition of hidden states with the unembedding matrix and the resulting samplable policy. This decomposition reveals Internal Layer Policies, corresponding to contributions from individual layers, and Internal Modular Policies, which align with the self-attention and feed-forward network (FFN) components within each layer. By analyzing the entropy of these internal policies, we find that: (a) early layers maintain high entropy for exploration, while top layers converge to near-zero entropy for refinement, with convergence patterns varying across model series; (b) LLaMA's prediction space converges rapidly in the final layer, whereas Qwen-series models, especially Qwen3, exhibit a more human-like, progressively structured reasoning pattern. Motivated by these findings, we propose Bottom-up Policy Optimization (BuPO), a novel RL paradigm that directly optimizes the internal layer policy during early training. By aligning the training objective at lower layers, BuPO reconstructs foundational reasoning capabilities and achieves superior performance. Extensive experiments on complex reasoning benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/Trae1ounG/BuPO.
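The abstract does not spell out BuPO's exact training objective, so the following is only a rough sketch of the bottom-up idea: an advantage-weighted (REINFORCE-style) loss applied to the internal policy of a chosen lower layer rather than to the final output policy. The `target_layer` choice, the advantage signal, and the loss form are all assumptions for illustration, not the paper's method; see the repository above for the actual implementation.

```python
# Hedged sketch of the bottom-up idea (not BuPO's actual objective): optimize
# the internal policy of a lower layer instead of the final output policy.
import torch
import torch.nn.functional as F

def bottom_up_loss(model, input_ids, action_ids, advantages, target_layer=8):
    """REINFORCE-style loss on the layer-`target_layer` internal policy.

    action_ids are assumed to already be aligned next-token targets, and
    advantages a per-token (B, T) credit signal from the RL rollout.
    """
    out = model(input_ids=input_ids, output_hidden_states=True)
    h = out.hidden_states[target_layer]                   # (B, T, d) residual stream
    h = model.model.norm(h)                               # architecture-dependent final norm
    logits = h @ model.get_output_embeddings().weight.T   # internal layer policy logits
    logp = F.log_softmax(logits.float(), dim=-1)
    # log-probability of the sampled actions under the internal layer policy
    act_logp = logp.gather(-1, action_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)
    return -(advantages * act_logp).mean()                # maximize advantage-weighted log-prob
```

Per the abstract, this lower-layer objective would be used only during early training to rebuild foundational reasoning capabilities, after which standard final-policy optimization applies.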