Offline Reinforcement Learning for LLM Multi-Step Reasoning

📅 2024-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak multi-step reasoning of large language models (LLMs) under sparse rewards and without paired preference data, this paper proposes OREO, a framework that brings maximum-entropy offline reinforcement learning (RL) to LLM multi-step reasoning. OREO jointly learns a policy and a value function by optimizing the soft Bellman equation, improving credit assignment without requiring human-annotated preference pairs. In principle, this reduces the need to collect pairwise data and makes the learned policy more robust. The value function further enables value-guided tree search at inference time with no additional training, boosting reasoning performance. Empirically, OREO outperforms existing offline methods, including DPO, on GSM8K, MATH, and ALFWorld, and extends to an iterative multi-round refinement framework, demonstrating scalability to complex reasoning tasks.
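The joint policy/value objective sketched above can be illustrated with a minimal per-trajectory consistency loss. This is an assumption-laden sketch, not the paper's implementation: the soft Bellman relation is taken in the form V(s_t) − V(s_{t+1}) ≈ r_t + β·log(π_ref(a_t|s_t) / π(a_t|s_t)), and the function names, the KL coefficient `beta`, and the squared-error objective are illustrative choices.

```python
def soft_bellman_residual(v_t, v_next, reward, logp_pi, logp_ref, beta=0.1):
    """Deviation from soft Bellman consistency at a single reasoning step.

    Assumed form: V(s_t) - V(s_{t+1}) = r_t + beta * (log pi_ref - log pi).
    A residual of zero means the value estimates and policy log-probs
    are mutually consistent at this step.
    """
    return v_t - v_next - reward - beta * (logp_ref - logp_pi)

def oreo_style_loss(values, rewards, logps_pi, logps_ref, beta=0.1):
    """Mean squared soft-Bellman residual over a T-step trajectory.

    values:  T+1 value estimates V(s_0), ..., V(s_T)
    rewards: T per-step rewards (typically sparse: nonzero only at the end)
    logps_pi / logps_ref: per-step action log-probs under policy / reference
    """
    deltas = [
        soft_bellman_residual(values[t], values[t + 1], rewards[t],
                              logps_pi[t], logps_ref[t], beta)
        for t in range(len(rewards))
    ]
    return sum(d * d for d in deltas) / len(deltas)
```

In practice both the policy and value networks would be updated from this residual (with appropriate stop-gradients), but the scalar form above captures the credit-assignment idea: the sparse terminal reward is spread across steps through the chain of value differences.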

📝 Abstract
Improving the multi-step reasoning ability of large language models (LLMs) with offline reinforcement learning (RL) is essential for quickly adapting them to complex tasks. While Direct Preference Optimization (DPO) has shown promise in aligning LLMs with human preferences, it is less suitable for multi-step reasoning tasks because (1) DPO relies on paired preference data, which is not readily available for multi-step reasoning tasks, and (2) it treats all tokens uniformly, making it ineffective for credit assignment in multi-step reasoning tasks, which often come with sparse rewards. In this work, we propose OREO (Offline Reasoning Optimization), an offline RL method for enhancing LLM multi-step reasoning. Building on insights from previous work on maximum entropy reinforcement learning, it jointly learns a policy model and a value function by optimizing the soft Bellman equation. We show in principle that it reduces the need to collect pairwise data and enables better credit assignment. Empirically, OREO surpasses existing offline learning methods on multi-step reasoning benchmarks, including mathematical reasoning tasks (GSM8K, MATH) and embodied agent control (ALFWorld). The approach can be extended to a multi-iteration framework when additional resources are available. Furthermore, the learned value function can be leveraged to guide the tree search for free, which can further boost performance at test time.
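The test-time use of the learned value function mentioned at the end of the abstract amounts to value-guided search over partial reasoning traces. The sketch below shows the generic pattern as a beam search; the function names and the toy node representation are hypothetical (in the actual system, nodes would be partial token sequences scored by the trained value network).

```python
def value_guided_beam_search(expand, value, root, beam_width=2, depth=3):
    """Keep the top-`beam_width` partial solutions, ranked by a learned
    value function, at each expansion depth; return the best leaf found."""
    beam = [root]
    for _ in range(depth):
        # Expand every node currently on the beam into its children.
        candidates = [child for node in beam for child in expand(node)]
        if not candidates:
            break
        # Prune: keep only the highest-value candidates.
        candidates.sort(key=value, reverse=True)
        beam = candidates[:beam_width]
    return max(beam, key=value)

# Toy usage: nodes are tuples of step scores, expansion appends 0, 1, or 2,
# and the stand-in "value function" is simply the running sum.
best = value_guided_beam_search(
    expand=lambda node: [node + (k,) for k in (0, 1, 2)],
    value=sum,
    root=(),
)
# best == (2, 2, 2)
```

Because the value network is already trained as part of the offline RL objective, this guidance comes "for free": no extra training is needed to rank candidate reasoning steps at inference time.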
Problem

Research questions and friction points this paper is trying to address.

Offline Reinforcement Learning
Large Language Models
Multi-step Problem Solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline Reinforcement Learning
Large Language Models
Multi-step Reasoning