Beyond Alignment: Expanding Reasoning Capacity via Manifold-Reshaping Policy Optimization

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes Manifold-Reshaping Policy Optimization (MRPO), a framework that moves beyond conventional reinforcement learning approaches, which are often limited to aligning the latent capabilities a large language model acquired during pretraining within its low-rank bias manifold. MRPO actively expands the model's reasoning space through geometric intervention: it first applies Spectral Orthogonal Exploration (SOE) to initialize the policy within the null space of the bias manifold, then maintains high-dimensional reasoning trajectories via effective-rank regularization. This challenges the long-standing "accessibility boundary" assumption through geometric principles rather than traditional RL exploration. Evaluated on a 4B-parameter model, MRPO achieves state-of-the-art performance on mathematical reasoning tasks, outperforming substantially larger models such as Qwen3-32B and surpassing the capability ceiling of standard GRPO.
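The null-space idea behind SOE can be illustrated with a small linear-algebra sketch: project an exploration direction onto the orthogonal complement of the top singular directions of a weight matrix, which stands in for the low-rank bias manifold. Everything here (`null_space_projector`, the choice of `k`, the toy matrix) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

def null_space_projector(W, k):
    # Orthogonal projector onto the complement of the top-k right singular
    # directions of W (a stand-in for the low-rank bias manifold).
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    V_k = Vt[:k].T                        # (d, k) top-k right singular vectors
    return np.eye(W.shape[1]) - V_k @ V_k.T

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))          # toy "pretrained" weight matrix
P = null_space_projector(W, k=4)

delta = rng.standard_normal(16)           # raw exploration direction
delta_perp = P @ delta                    # component outside the manifold

_, _, Vt = np.linalg.svd(W, full_matrices=False)
print(np.abs(Vt[:4] @ delta_perp).max())  # ≈ 0: orthogonal to top-k directions
```

Any update direction passed through such a projector cannot re-enter the dominant low-rank subspace, which is the geometric intuition the summary describes.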

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has demonstrated remarkable success in enhancing the reasoning capabilities of Large Language Models (LLMs). However, recent studies question whether RL genuinely expands reasoning capacity or merely aligns existing latent capabilities, arguing that exploration remains confined within the pre-trained model's low-rank bias manifold. In this work, we challenge this accessibility boundary hypothesis by demonstrating that the latent reasoning space can be fundamentally expanded through targeted geometric interventions. We propose Manifold-Reshaping Policy Optimization (MRPO), a geometric framework designed to fundamentally restructure the inference space of LLMs. MRPO operates in two stages: first, we employ Spectral Orthogonal Exploration (SOE) to eject the policy initialization into the null space of the bias manifold; second, we integrate an Effective Rank regularization term into the policy optimization objective. This approach incentivizes the discovery and maintenance of high-dimensional reasoning trajectories against the entropy-reducing tendency of standard RL. Empirically, our 4B-parameter method achieves state-of-the-art performance on mathematical tasks, significantly outperforming larger models (e.g., Qwen3-32B) and expanding the capability boundary beyond standard GRPO. Our code is available at https://anonymous.4open.science/r/MRPO-D57B/
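The effective-rank regularizer mentioned in the abstract is based on a standard quantity (Roy & Vetterli, 2007): the exponential of the Shannon entropy of the normalized singular-value spectrum. The sketch below computes it and shows a hypothetical way to fold it into a policy objective; `mrpo_style_loss`, `policy_loss`, `hidden_states`, and the coefficient `lam` are assumed names, and the paper's exact objective may differ.

```python
import numpy as np

def effective_rank(H, eps=1e-12):
    # Effective rank: exp of the entropy of the normalized singular values.
    s = np.linalg.svd(H, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -np.sum(p * np.log(p + eps))
    return float(np.exp(entropy))

# A full-rank orthonormal matrix has effective rank near its dimension,
# while a rank-1 matrix collapses to roughly 1.
print(effective_rank(np.eye(5)))                         # ≈ 5.0
print(effective_rank(np.outer(np.ones(4), np.ones(6))))  # ≈ 1.0

def mrpo_style_loss(policy_loss, hidden_states, lam=0.01):
    # Hypothetical combined objective: subtracting the effective rank
    # rewards high-dimensional reasoning trajectories, counteracting the
    # entropy-reducing (rank-collapsing) tendency of standard RL.
    return policy_loss - lam * effective_rank(hidden_states)
```

Because the effective rank is maximized by a flat singular-value spectrum, penalizing its decrease pushes the policy's representations away from the low-rank collapse the abstract attributes to standard RLVR training.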
Problem

Research questions and friction points this paper is trying to address.

reasoning capacity
manifold constraint
reinforcement learning
latent space expansion
low-rank bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Manifold-Reshaping Policy Optimization
Spectral Orthogonal Exploration
Effective Rank regularization
reasoning capacity expansion
geometric intervention