🤖 AI Summary
Large language models (LLMs) often identify promising solution ideas early in reinforcement learning (RL) post-training but lack the capability at that stage to carry them through on complex reasoning problems; meanwhile, policy gradient updates irreversibly suppress these early exploratory trajectories, preventing their reuse even after the model's capability later improves. Method: The paper proposes Retrospective Replay-based Reinforcement Learning (RRL), a framework featuring dynamic experience caching and state-value-based backtracking evaluation, which mitigates the irreversible suppression of early exploration by policy gradients and lets exploration memory be carried across training phases and reused. Contribution/Results: RRL integrates into the RLHF pipeline and significantly improves solution success rates on mathematical reasoning and code generation benchmarks, while also making the model safer and more helpful and maintaining high exploration efficiency throughout training.
📝 Abstract
Reinforcement learning (RL) has increasingly become a pivotal technique in the post-training of large language models (LLMs). Effective exploration of the output space is essential for the success of RL. We observe that for complex problems, the model exhibits strong exploratory capabilities during the early stages of training and can identify promising solution ideas. However, its limited capability at this stage prevents it from successfully solving these problems, and the policy gradient's early suppression of these potentially valuable solution ideas hinders the model from revisiting and re-exploring them later. Consequently, although the LLM's capabilities improve in the later stages of training, it still struggles to effectively address these complex problems. To address this exploration issue, we propose a novel algorithm named Retrospective Replay-based Reinforcement Learning (RRL), which introduces a dynamic replay mechanism throughout the training process. RRL enables the model to revisit promising states identified in the early stages, thereby improving the efficiency and effectiveness of its exploration. To evaluate RRL, we conduct extensive experiments on complex reasoning tasks, including mathematical reasoning and code generation, as well as general dialogue tasks. The results indicate that RRL maintains high exploration efficiency throughout the training period, significantly enhancing the effectiveness of RL in optimizing LLMs for complex reasoning tasks. Moreover, it also improves the performance of RLHF, making the model both safer and more helpful.
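The core idea of the dynamic replay mechanism can be sketched as a value-prioritized cache: promising partial solutions found early in training are stored with a state-value estimate, low-value entries are evicted as capacity fills, and later training phases sample cached states as restart points for re-exploration. The class name, eviction rule, and value-weighted sampling below are illustrative assumptions for a minimal sketch, not the paper's exact algorithm.

```python
import heapq
import random


class RetrospectiveReplayBuffer:
    """Hypothetical sketch of RRL-style dynamic experience caching.

    Caches promising partial solutions ("states") keyed by an estimated
    state value, so later training phases can resume exploration from
    them. All names and the scoring rule are assumptions, not the
    paper's exact design.
    """

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self._heap = []    # min-heap of (value, counter, state)
        self._counter = 0  # tie-breaker so heapq never compares states

    def add(self, state, value_estimate):
        """Cache a state; evict the lowest-value entry when full."""
        entry = (value_estimate, self._counter, state)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif entry > self._heap[0]:
            heapq.heapreplace(self._heap, entry)

    def sample(self):
        """Pick a cached state to replay, weighted by its value estimate."""
        values = [max(v, 1e-6) for v, _, _ in self._heap]
        states = [s for _, _, s in self._heap]
        return random.choices(states, weights=values)[0]


# Toy usage: early-phase candidates cached, the weakest one evicted.
buf = RetrospectiveReplayBuffer(capacity=2)
buf.add("prompt + partial proof A", 0.3)
buf.add("prompt + partial proof B", 0.8)
buf.add("prompt + partial proof C", 0.5)  # evicts A (lowest value)
restart_state = buf.sample()             # later phase resumes from B or C
```

In a full pipeline, `value_estimate` would come from the critic or a backtracking evaluation of the partial trajectory, and the sampled state would seed new rollouts alongside on-policy generation.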