D$^2$Plan: Dual-Agent Dynamic Global Planning for Complex Retrieval-Augmented Reasoning

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of fragmented and interference-prone reasoning chains in retrieval-augmented large language models during multi-hop inference, caused by irrelevant context. To mitigate this, the authors propose D²Plan, a dual-agent dynamic global planning paradigm comprising a Reasoner that constructs and iteratively refines a global reasoning plan, and a Purifier that evaluates retrieval relevance and distills critical evidence. Their synergistic interaction enables plan-guided, efficient inference. The framework is trained via a two-stage process: supervised fine-tuning on synthetically generated cold-start trajectories, followed by plan-oriented reinforcement learning. By integrating explicit plan generation with an information purification mechanism, D²Plan substantially improves the coherence of multi-step reasoning and its robustness to retrieval noise. It achieves state-of-the-art performance across multiple challenging question-answering benchmarks, demonstrating superior multi-hop reasoning capability and resilience to interference.

📝 Abstract
Recent search-augmented LLMs trained with reinforcement learning (RL) can interleave searching and reasoning for multi-hop reasoning tasks. However, they face two critical failure modes as the accumulating context becomes flooded with both crucial evidence and irrelevant information: (1) ineffective search chain construction that produces incorrect queries or omits retrieval of critical information, and (2) reasoning hijacking by peripheral evidence that causes models to misidentify distractors as valid evidence. To address these challenges, we propose **D$^2$Plan**, a **D**ual-agent **D**ynamic global **Plan**ning paradigm for complex retrieval-augmented reasoning. **D$^2$Plan** operates through the collaboration of a *Reasoner* and a *Purifier*: the *Reasoner* constructs explicit global plans during reasoning and dynamically adapts them based on retrieval feedback; the *Purifier* assesses retrieval relevance and condenses key information for the *Reasoner*. We further introduce a two-stage training framework consisting of supervised fine-tuning (SFT) cold-start on synthesized trajectories and RL with plan-oriented rewards to teach LLMs to master the **D$^2$Plan** paradigm. Extensive experiments demonstrate that **D$^2$Plan** enables more coherent multi-step reasoning and stronger resilience to irrelevant information, thereby achieving superior performance on challenging QA benchmarks.
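The Reasoner–Purifier interplay described in the abstract can be illustrated as a simple control loop. The sketch below is a toy stand-in, not the paper's implementation: the rule-based `reasoner` and `purifier` functions, the mock corpus, and the relevance labels are all hypothetical placeholders for the trained LLM agents and retriever.

```python
def reasoner(question, evidence, plan):
    """Reasoner agent (toy stand-in for an LLM call): builds a global plan
    on the first step, then dynamically refines it by dropping sub-goals
    already covered by purified evidence."""
    if plan is None:
        plan = ["find the director of the film",
                "find the director's birth year"]
    return [step for step in plan if step not in evidence]

def retrieve(query):
    """Mock retriever returning (passage, is_relevant) pairs, mixing
    crucial evidence with distractors."""
    corpus = {
        "find the director of the film": [
            ("The film was directed by Jane Doe.", True),
            ("The film grossed $100M worldwide.", False),
        ],
        "find the director's birth year": [
            ("Jane Doe was born in 1970.", True),
            ("Jane Doe owns two cats.", False),
        ],
    }
    return corpus[query]

def purifier(query, passages):
    """Purifier agent: keeps only passages judged relevant to the query.
    Here the ground-truth labels stand in for a learned relevance judge."""
    return [text for text, relevant in passages if relevant]

def d2plan_loop(question):
    """Plan-guided loop: plan -> retrieve -> purify -> refine plan."""
    evidence, plan = {}, None
    while True:
        plan = reasoner(question, evidence, plan)
        if not plan:               # all sub-goals resolved
            break
        step = plan[0]             # execute the next sub-goal
        evidence[step] = purifier(step, retrieve(step))
    return evidence
```

Running `d2plan_loop("When was the director of the film born?")` walks the plan two steps, and the distractor passages never enter the accumulated evidence, which is the failure mode the Purifier is meant to prevent.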
Problem

Research questions and friction points this paper is trying to address.

retrieval-augmented reasoning
multi-hop reasoning
search chain construction
reasoning hijacking
irrelevant information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Agent Planning
Retrieval-Augmented Reasoning
Dynamic Global Planning
Reinforcement Learning
Information Purification