AI Summary
This work addresses the challenge that existing reinforcement learning approaches for language models struggle to integrate the dual mechanisms central to human learning: leveraging external experience and internalizing knowledge. To bridge this gap, we propose the Dual Guidance Optimization (DGO) framework, which, for the first time, incorporates both mechanisms into reinforcement learning from verifiable rewards (RLVR). DGO establishes a closed-loop optimization process by jointly guiding policy exploration through an external experience bank and the model's internal knowledge, synergistically combining experience retrieval, policy gradient optimization, and trajectory reuse. Extensive experiments across multiple reasoning tasks demonstrate that DGO significantly outperforms current baselines, highlighting the critical role of efficient experience internalization in enhancing the reasoning capabilities of language models.
Abstract
Recently, reinforcement learning~(RL) has become an important approach for improving the capabilities of large language models~(LLMs). In particular, reinforcement learning from verifiable rewards~(RLVR) has emerged as a promising paradigm for reasoning tasks. However, existing RL-based training remains only a rough approximation of human learning: human learners leverage both external and internal experience to guide exploration and gradually internalize useful trajectories into stable knowledge. Motivated by this gap, we ask: how can LLMs better utilize and internalize experience during RLVR training? To answer this question, we propose \textbf{D}ual \textbf{G}uidance \textbf{O}ptimization~(\textbf{DGO}), a unified framework that leverages \emph{external} and \emph{internal experience} to improve training effectiveness. Specifically, DGO first constructs an experience bank from previously explored trajectories. The policy then explores under the joint guidance of the experience bank and the model's internal knowledge. The resulting trajectories are used both to refine the experience bank and to optimize the model parameters, forming a closed loop of experience utilization and internalization. Experiments show that DGO consistently outperforms baseline methods, suggesting that better utilization and internalization of experience lead to more effective reasoning.
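The closed loop described above (retrieve from an experience bank, explore under joint guidance, then refine the bank and update the policy) can be sketched as a toy training loop. All names here (`ToyPolicy`, `dgo_training_loop`), the verifiable-reward task, and the guidance/update rules are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import random


class ToyPolicy:
    """Toy stand-in for an LLM policy: picks a 'trajectory' (answer) per prompt."""

    def __init__(self, answers, seed=0):
        self.answers = answers   # candidate trajectories
        self.prefs = {}          # internal knowledge: learned preference scores
        self.rng = random.Random(seed)

    def explore(self, prompt, guidance):
        # Joint guidance: sometimes reuse retrieved external experience,
        # otherwise sample from internal preferences (toy stand-in for
        # guidance-conditioned generation).
        if guidance and self.rng.random() < 0.5:
            trajectory = self.rng.choice(guidance)
        else:
            weights = [1.0 + self.prefs.get((prompt, a), 0.0) for a in self.answers]
            trajectory = self.rng.choices(self.answers, weights=weights)[0]
        # Verifiable reward: here the "correct" answer is assumed to be 2 * prompt.
        reward = 1.0 if trajectory == 2 * prompt else 0.0
        return trajectory, reward

    def update(self, prompt, trajectory, reward):
        # Policy-gradient-like step: reinforce rewarded trajectories
        # (internalization of experience into parameters).
        key = (prompt, trajectory)
        self.prefs[key] = self.prefs.get(key, 0.0) + reward


def dgo_training_loop(policy, prompts, num_steps):
    """One possible reading of the closed loop: bank -> exploration -> refinement."""
    experience_bank = []  # (prompt, trajectory, reward) tuples from past exploration
    for _ in range(num_steps):
        for prompt in prompts:
            # Retrieve external experience relevant to this prompt.
            guidance = [t for (p, t, r) in experience_bank if p == prompt]
            # Explore under joint guidance of bank and internal knowledge.
            trajectory, reward = policy.explore(prompt, guidance)
            # Internalize: optimize parameters from the verified reward.
            policy.update(prompt, trajectory, reward)
            # Refine the bank: keep only rewarded trajectories for reuse.
            if reward > 0:
                experience_bank.append((prompt, trajectory, reward))
    return experience_bank
```

Under these assumptions the bank accumulates only verified trajectories, so later exploration is increasingly steered toward them while the policy's internal preferences absorb the same signal.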