Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

πŸ“… 2026-03-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge that existing reinforcement learning approaches for language models struggle to integrate the dual mechanisms central to human learning: leveraging external experience and internalizing knowledge. To bridge this gap, the authors propose the Dual Guidance Optimization (DGO) framework, which, for the first time, incorporates both mechanisms into reinforcement learning from verifiable rewards (RLVR). DGO establishes a closed-loop optimization process by jointly guiding policy exploration with an external experience bank and the model's internal knowledge, synergistically combining experience retrieval, policy-gradient optimization, and trajectory reuse. Extensive experiments across multiple reasoning tasks show that DGO significantly outperforms current baselines, highlighting the critical role of efficient experience internalization in enhancing the reasoning capabilities of language models.

πŸ“ Abstract
Recently, reinforcement learning (RL) has become an important approach for improving the capabilities of large language models (LLMs). In particular, reinforcement learning from verifiable rewards (RLVR) has emerged as a promising paradigm for reasoning tasks. However, existing RL-based training remains only a rough approximation of human learning: human learners leverage both external and internal experience to guide exploration and gradually internalize useful trajectories into stable knowledge. Motivated by this gap, we ask: how can LLMs better utilize and internalize experience during RLVR training? To answer this question, we propose **D**ual **G**uidance **O**ptimization (**DGO**), a unified framework that leverages *external* and *internal* experience to improve training effectiveness. Specifically, DGO first constructs an experience bank from previously explored trajectories. The policy then performs exploration under the joint guidance of the experience bank and the model's internal knowledge. The resulting trajectories are further used to refine the experience bank and optimize the model parameters, forming a closed loop of experience utilization and internalization. Experiments show that DGO consistently outperforms baseline methods, suggesting that better utilization and internalization of experience lead to more effective reasoning.
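The closed loop sketched in the abstract (build an experience bank, explore under joint guidance from the bank and the policy, score trajectories with a verifiable reward, then use the results to refine both the bank and the policy) can be caricatured in a few lines. Everything below is an illustrative assumption, not the paper's actual method: the `ExperienceBank` class, the `rollout` and `verifiable_reward` stubs, and the scalar `policy_bias` standing in for model parameters are all hypothetical simplifications of DGO's components.

```python
import random


class ExperienceBank:
    """Toy experience bank: retains the top-k highest-reward trajectories."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []  # list of (reward, trajectory), best first

    def add(self, trajectory, reward):
        self.entries.append((reward, trajectory))
        self.entries.sort(key=lambda e: e[0], reverse=True)
        del self.entries[self.capacity:]  # evict the lowest-reward entries

    def retrieve(self):
        # Best stored trajectory serves as external guidance, if any exists.
        return self.entries[0][1] if self.entries else None


def rollout(policy_bias, guidance):
    """Mock exploration: external guidance nudges sampling toward success."""
    success_prob = policy_bias + (0.3 if guidance is not None else 0.0)
    return "correct" if random.random() < success_prob else "wrong"


def verifiable_reward(trajectory):
    # RLVR-style binary reward from an automatic verifier.
    return 1.0 if trajectory == "correct" else 0.0


def train(steps=50, seed=0):
    random.seed(seed)
    bank = ExperienceBank()
    policy_bias = 0.2  # scalar stand-in for the model's parameters
    for _ in range(steps):
        guidance = bank.retrieve()              # utilize external experience
        traj = rollout(policy_bias, guidance)   # explore under joint guidance
        reward = verifiable_reward(traj)
        if reward > 0:
            bank.add(traj, reward)              # refine the experience bank
            policy_bias = min(0.9, policy_bias + 0.05)  # internalize via update
    return policy_bias, len(bank.entries)
```

The point of the sketch is the data flow, not the arithmetic: each iteration both reads from the bank (utilization) and, on success, writes back to the bank and updates the policy (internalization), closing the loop the abstract describes.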
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
large language models
experiential learning
experience internalization
reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Guidance Optimization
Experience Bank
Reinforcement Learning from Verifiable Rewards
Experience Internalization
Large Language Models
πŸ”Ž Similar Papers
No similar papers found.
Fei Bai
Gaoling School of Artificial Intelligence, Renmin University of China
Zhipeng Chen
Ph.D. student, GSAI, Renmin University of China
Natural Language Processing · Pre-trained Language Model · Large Language Model
Chuan Hao
IQuest Research
Ming Yang
IQuest Research
Ran Tao
IQuest Research
Bryan Dai
IQuest Research
Wayne Xin Zhao
Professor, Renmin University of China
Recommender System · Natural Language Processing · Large Language Model
Jian Yang
Beihang University
Hongteng Xu
Gaoling School of Artificial Intelligence, Renmin University of China