🤖 AI Summary
This work addresses exploration collapse in large language models during reinforcement learning, a failure mode arising from the semantic homogenization of reasoning trajectories. To mitigate it, the authors propose a novel exploration mechanism grounded in an iterative information bottleneck framework. By triggering topological bifurcations of latent reasoning paths at high-entropy states, the method elevates exploration from token-level perturbations to structural modifications of entire trajectories. Leveraging the information bottleneck principle, it filters out uninformative trajectories and generates self-supervised rewards. This study is the first to integrate information bottleneck theory into trajectory guidance and self-reward generation for reasoning, moving beyond conventional entropy-regularization paradigms. The approach achieves state-of-the-art performance on four mathematical reasoning benchmarks, yielding up to a 5.3% improvement in accuracy and a 7.4% gain in response diversity.
📝 Abstract
Progress in Reinforcement Learning with Verifiable Rewards (RLVR) for Large Language Model (LLM) reasoning has been hindered by a persistent challenge: exploration collapse, in which the semantic homogeneity of random rollouts traps models in narrow, over-optimized behaviors. Existing methods leverage policy entropy to encourage exploration, but they face inherent limitations: global entropy regularization is susceptible to reward hacking, which can induce meaningless verbosity, whereas local token-selective updates struggle against the strong inductive bias of pre-trained models. To address this, we propose Latent Policy Optimization via Iterative Information Bottleneck (IIB-LPO), a novel approach that shifts exploration from statistical perturbation of token distributions to topological branching of reasoning trajectories. IIB-LPO triggers latent branching at high-entropy states to diversify reasoning paths and employs the Information Bottleneck principle both as a trajectory filter and as a self-reward mechanism, ensuring concise yet informative exploration. Empirical results on four mathematical reasoning benchmarks demonstrate that IIB-LPO achieves state-of-the-art performance, surpassing prior methods by up to 5.3% in accuracy and 7.4% in diversity metrics.
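The two mechanisms the abstract names, branching at high-entropy decoding states and an Information-Bottleneck-style trade-off used as both trajectory filter and self-reward, can be caricatured in a toy sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: the entropy threshold, the `relevance`/`rate` trajectory scores (stand-ins for the IB relevance term and compression cost), and the `beta` weight are all hypothetical names and values chosen for the example.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def branch_points(step_probs, threshold=1.0):
    """Indices of high-entropy decoding steps, i.e. the states at which a
    latent branching scheme would spawn alternative reasoning paths."""
    return [i for i, probs in enumerate(step_probs)
            if token_entropy(probs) > threshold]

def ib_reward(relevance, rate, beta=0.5):
    """IB-style self-reward: credit informative trajectories (relevance)
    while penalizing their description cost (rate), weighted by beta."""
    return relevance - beta * rate

def filter_trajectories(trajectories, beta=0.5, min_reward=0.0):
    """Drop trajectories whose IB reward does not clear the threshold,
    keeping only concise, informative candidates."""
    return [t for t in trajectories
            if ib_reward(t["relevance"], t["rate"], beta) > min_reward]
```

For example, a near-uniform next-token distribution (entropy ≈ ln 3) exceeds a 1.0-nat threshold and marks a branch point, while a peaked distribution does not; likewise, a verbose but uninformative trajectory receives a negative IB reward and is filtered out.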