🤖 AI Summary
Existing generative retrieval (GR) approaches rely on autoregressive models to generate document IDs token-by-token, suffering from error propagation and an inherent trade-off between efficiency and accuracy. Method: We propose the first GR framework based on discrete diffusion language models, reformulating DocID generation as a parallel denoising process. Our approach employs stochastic masking during training and optimizes a retrieval-aware objective function; at inference time, it supports configurable multi-step denoising, enabling flexible quality–latency trade-offs. Contribution/Results: On standard benchmarks, our method matches the performance of strong autoregressive baselines while overcoming the fundamental limitations of unidirectional generation. It demonstrates, for the first time, the effectiveness and practicality of non-autoregressive diffusion models for end-to-end retrieval—offering improved robustness, parallelization, and controllable inference speed without sacrificing retrieval accuracy.
📝 Abstract
Generative retrieval (GR) re-frames document retrieval as a sequence-based document identifier (DocID) generation task, memorizing documents with model parameters and enabling end-to-end retrieval without explicit indexing. Existing GR methods are based on auto-regressive generative models, i.e., token generation proceeds from left to right. However, such auto-regressive methods suffer from: (1) a mismatch between DocID generation and natural language generation, e.g., an incorrect DocID token generated at an early step leads to an entirely erroneous retrieval; and (2) an inability to dynamically balance the trade-off between retrieval efficiency and accuracy, which is crucial for practical applications. To address these limitations, we propose generative document retrieval with diffusion language models, dubbed DiffuGR. It models DocID generation as a discrete diffusion process: during training, DocIDs are corrupted through a stochastic masking process, and a diffusion language model is learned to recover them under a retrieval-aware objective. For inference, DiffuGR generates DocID tokens in parallel and refines them through a controllable number of denoising steps. In contrast to conventional left-to-right auto-regressive decoding, DiffuGR provides a novel mechanism that first generates the more confident DocID tokens and then refines the rest through diffusion-based denoising. Moreover, DiffuGR offers explicit runtime control over the quality–latency trade-off. Extensive experiments on benchmark retrieval datasets show that DiffuGR is competitive with strong auto-regressive generative retrievers, while offering flexible speed–accuracy trade-offs through variable denoising budgets. Overall, our results indicate that non-autoregressive diffusion models are a practical and effective alternative for generative document retrieval.
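The confidence-ordered parallel decoding the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `toy_denoiser` stands in for the actual diffusion language model (here it just emits random distributions, where a real model would condition on the query and the partially unmasked DocID), and the function and parameter names are hypothetical. The key mechanics are real, though: start from a fully masked DocID, and at each denoising step commit only the most confident predictions, so the number of steps directly controls the quality–latency trade-off.

```python
import math
import random

MASK = None  # sentinel marking a still-masked DocID token position


def toy_denoiser(tokens, vocab_size, rng):
    """Stand-in for the diffusion language model (hypothetical): returns a
    per-position probability distribution over the DocID vocabulary.
    A real model would condition on the query and on the tokens
    already unmasked in `tokens`."""
    dists = []
    for _ in tokens:
        logits = [rng.gauss(0.0, 1.0) for _ in range(vocab_size)]
        z = max(logits)
        exps = [math.exp(l - z) for l in logits]
        total = sum(exps)
        dists.append([e / total for e in exps])
    return dists


def diffusion_decode(seq_len=8, steps=4, vocab_size=16, seed=0):
    """Confidence-ordered parallel denoising: begin with a fully masked
    DocID and, over `steps` iterations, fix the most confident masked
    positions while leaving the rest for later refinement."""
    rng = random.Random(seed)
    tokens = [MASK] * seq_len
    for t in range(steps):
        dists = toy_denoiser(tokens, vocab_size, rng)
        masked = [i for i, tok in enumerate(tokens) if tok is MASK]
        # Linear unmasking schedule: finish in exactly `steps` steps.
        k = math.ceil(len(masked) / (steps - t))
        # Rank the masked positions by prediction confidence (max prob).
        masked.sort(key=lambda i: max(dists[i]), reverse=True)
        for i in masked[:k]:
            tokens[i] = max(range(vocab_size), key=lambda v, i=i: dists[i][v])
    return tokens
```

Lowering `steps` trades accuracy for latency (more positions are committed per step with less refinement), while raising it spends more denoising passes per DocID, which is the runtime knob the abstract refers to.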