DLLM-Searcher: Adapting Diffusion Large Language Model for Search Agents

πŸ“… 2026-02-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses two obstacles to dLLM-based search agents: the high end-to-end latency caused by the serial execution of the ReAct paradigm, and the weak tool-calling and reasoning capabilities of current diffusion large language models (dLLMs). To overcome these challenges, the authors propose DLLM-Searcher, a framework built on a two-stage post-training pipeline, Agentic Supervised Fine-Tuning (SFT) followed by Agentic Variance-Reduced Preference Optimization (VRPO), which together improve the model's retrieval and reasoning proficiency. They further introduce Parallel ReAct (P-ReAct), a paradigm that lets the model continue generating subsequent reasoning steps while awaiting tool responses. Experimental results show that DLLM-Searcher performs on par with state-of-the-art LLM-based search agents, while P-ReAct delivers approximately 15% end-to-end inference acceleration.

πŸ“ Abstract
Recently, Diffusion Large Language Models (dLLMs) have demonstrated unique efficiency advantages, enabled by their inherently parallel decoding mechanism and flexible generation paradigm. Meanwhile, despite the rapid advancement of Search Agents, their practical deployment is constrained by two fundamental limitations. 1) Latency Challenge: the serial execution of multi-round reasoning, tool calling, and tool-response waiting under the ReAct agent paradigm induces severe end-to-end latency. Intuitively, dLLMs could leverage their distinctive strengths to improve the operational efficiency of agents under the ReAct paradigm. Practically, however, existing dLLM backbones face the 2) Agent Ability Challenge: existing dLLMs exhibit remarkably weak reasoning and tool-calling capabilities, preventing these advantages from being effectively realized in practice. In this paper, we propose DLLM-Searcher, an optimization framework for dLLM-based Search Agents. To solve the Agent Ability Challenge, we design a two-stage post-training pipeline encompassing Agentic Supervised Fine-Tuning (Agentic SFT) and Agentic Variance-Reduced Preference Optimization (Agentic VRPO), which enhances the backbone dLLM's information-seeking and reasoning capabilities. To mitigate the Latency Challenge, we leverage the flexible generation mechanism of dLLMs and propose a novel agent paradigm termed Parallel Reasoning and Acting (P-ReAct). P-ReAct guides the model to prioritize decoding tool_call instructions, thereby allowing the model to keep thinking while waiting for the tool's return. Experimental results demonstrate that DLLM-Searcher achieves performance comparable to mainstream LLM-based search agents, and P-ReAct delivers approximately 15% inference acceleration. Our code is available at https://anonymous.4open.science/r/DLLM-Searcher-553C
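The latency saving behind P-ReAct comes from overlapping tool I/O with continued decoding: once the tool_call instruction is emitted, the call is dispatched immediately while the model keeps producing reasoning tokens. A minimal asyncio sketch of this control flow (all function and variable names here are illustrative placeholders, not the paper's implementation):

```python
import asyncio


async def call_tool(query: str) -> str:
    # Stand-in for a search tool with network latency.
    await asyncio.sleep(0.05)
    return f"results for {query!r}"


async def generate_reasoning_step(step: int) -> str:
    # Stand-in for decoding one block of reasoning tokens.
    await asyncio.sleep(0.01)
    return f"thought-{step}"


async def p_react_turn(query: str, max_steps: int = 3) -> tuple[list[str], str]:
    # Classic ReAct would block on the tool here; the P-ReAct idea is to
    # launch the tool call concurrently and keep reasoning until either
    # the result arrives or the reasoning budget is exhausted.
    tool_task = asyncio.create_task(call_tool(query))
    thoughts: list[str] = []
    step = 0
    while not tool_task.done() and step < max_steps:
        thoughts.append(await generate_reasoning_step(step))
        step += 1
    observation = await tool_task  # consume the tool response when needed
    return thoughts, observation


thoughts, observation = asyncio.run(p_react_turn("dLLM search agents"))
print(thoughts, observation)
```

Under these toy latencies the reasoning steps run entirely inside the tool's waiting window, so the turn costs roughly the tool latency alone instead of tool latency plus reasoning time, which is the source of the end-to-end speedup the abstract reports.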
Problem

Research questions and friction points this paper is trying to address.

Diffusion Large Language Model
Search Agents
Latency Challenge
Agent Ability Challenge
ReAct paradigm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Large Language Model
Search Agent
Parallel Reasoning
Tool Calling
Preference Optimization