DiffuGR: Generative Document Retrieval with Diffusion Language Models

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative retrieval (GR) approaches rely on autoregressive models to generate document IDs token-by-token, suffering from error propagation and an inherent trade-off between efficiency and accuracy. Method: We propose the first GR framework based on discrete diffusion language models, reformulating DocID generation as a parallel denoising process. Our approach employs stochastic masking during training and optimizes a retrieval-aware objective function; at inference time, it supports configurable multi-step denoising, enabling flexible quality–latency trade-offs. Contribution/Results: On standard benchmarks, our method matches the performance of strong autoregressive baselines while overcoming the fundamental limitations of unidirectional generation. It demonstrates, for the first time, the effectiveness and practicality of non-autoregressive diffusion models for end-to-end retrieval—offering improved robustness, parallelization, and controllable inference speed without sacrificing retrieval accuracy.

📝 Abstract
Generative retrieval (GR) re-frames document retrieval as a sequence-based document identifier (DocID) generation task, memorizing documents with model parameters and enabling end-to-end retrieval without explicit indexing. Existing GR methods are based on auto-regressive generative models, i.e., the token generation is performed from left to right. However, such auto-regressive methods suffer from: (1) mismatch between DocID generation and natural language generation, e.g., an incorrect DocID token generated in early left steps would lead to totally erroneous retrieval; and (2) failure to balance the trade-off between retrieval efficiency and accuracy dynamically, which is crucial for practical applications. To address these limitations, we propose generative document retrieval with diffusion language models, dubbed DiffuGR. It models DocID generation as a discrete diffusion process: during training, DocIDs are corrupted through a stochastic masking process, and a diffusion language model is learned to recover them under a retrieval-aware objective. For inference, DiffuGR attempts to generate DocID tokens in parallel and refines them through a controllable number of denoising steps. In contrast to conventional left-to-right auto-regressive decoding, DiffuGR provides a novel mechanism to first generate more confident DocID tokens and refine the generation through diffusion-based denoising. Moreover, DiffuGR also offers explicit runtime control over the quality-latency tradeoff. Extensive experiments on benchmark retrieval datasets show that DiffuGR is competitive with strong auto-regressive generative retrievers, while offering flexible speed and accuracy tradeoffs through variable denoising budgets. Overall, our results indicate that non-autoregressive diffusion models are a practical and effective alternative for generative document retrieval.
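The abstract's decoding scheme — start from fully masked DocID tokens, commit the most confident predictions first, and refine the rest over a configurable number of denoising steps — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_model`, the `MASK` id, and the fixed target DocID are hypothetical stand-ins for a trained diffusion language model.

```python
import math
import random

MASK = -1       # hypothetical mask token id
DOCID_LEN = 6   # toy DocID length

def toy_model(tokens):
    """Stand-in for the diffusion LM: returns a (token, confidence)
    prediction for every position. Here it always predicts a fixed
    target DocID with deterministic pseudo-random confidences."""
    target = [3, 1, 4, 1, 5, 9]  # hypothetical ground-truth DocID
    rng = random.Random(0)
    return [(target[i], 0.5 + 0.5 * rng.random())
            for i in range(len(tokens))]

def diffusion_decode(steps=3):
    """Parallel denoising: begin fully masked; at each step, unmask
    the most confident positions first (confidence-first order, as
    the abstract describes). `steps` is the quality-latency knob."""
    tokens = [MASK] * DOCID_LEN
    per_step = math.ceil(DOCID_LEN / steps)
    for _ in range(steps):
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        if not masked:
            break
        preds = toy_model(tokens)
        # rank still-masked positions by model confidence
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[:per_step]:
            tokens[i] = preds[i][0]
    return tokens
```

With `steps=1` all positions are committed in a single parallel pass (fastest, least refinement); larger budgets commit fewer tokens per step, letting later steps condition on earlier commitments — the runtime quality-latency trade-off the paper emphasizes.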
Problem

Research questions and friction points this paper is trying to address.

Addressing auto-regressive DocID generation mismatch and error propagation
Balancing retrieval efficiency and accuracy with controllable denoising steps
Providing explicit runtime control over quality-latency tradeoff in retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion language models for document retrieval
Models DocID generation as discrete diffusion process
Enables controllable denoising steps for quality-latency tradeoff
Xinpeng Zhao
Shandong University, Shandong, China
Yukun Zhao
Baidu Inc.
Natural Language Processing
Zhenyang Li
Baidu Inc., Beijing, China
Mengqi Zhang
Shandong University, Shandong, China
Jun Feng
Chinese Academy of Sciences, Beijing, China
Ran Chen
Peking University, Beijing, China
Ying Zhou
Shandong University, Shandong, China
Zhumin Chen
Shandong University
Shuaiqiang Wang
Principal Architect of Search Strategy, Baidu Inc.
Large language models · Information retrieval
Dawei Yin
Senior Director, Head of Search Science at Baidu
Machine Learning · Web Mining · Data Mining
Zhaochun Ren
Leiden University, Leiden, Netherlands
Xin Xin
Electrical and Computer Engineering, University of Central Florida
Memory · Computer Architecture