🤖 AI Summary
Can generative retrieval (GR) overcome the representation and optimization bottlenecks inherent in dense retrieval (DR)? This work systematically investigates the fundamental differences between GR and DR—in learning objectives, representational capacity, and optimization mechanisms—from both theoretical and empirical perspectives. We show that GR employs globally normalized maximum-likelihood optimization, directly modeling corpus distributions and relevance relationships, thereby avoiding the optimization drift commonly observed in DR; moreover, its representational capacity scales with model size and is not constrained by low-rank embedding assumptions. Experiments on Natural Questions and MS MARCO corroborate GR's theoretical advantages and superior scalability, though its current performance does not yet consistently surpass state-of-the-art DR methods. The study identifies the critical influence of negative sampling strategy, model scale, and bilinear similarity design, and proposes concrete directions for advancing GR toward practical deployment.
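To make the contrast in learning objectives concrete, here is one common way the two losses are written; this is a sketch of the standard formulations rather than the paper's exact notation, and it assumes DR is trained contrastively against a sampled negative set $N(q)$ while GR decodes a document identifier $y_{1:T}$ token by token:

$$
\mathcal{L}_{\mathrm{DR}}(q, d^{+}) = -\log \frac{\exp\big(s(q, d^{+})\big)}{\exp\big(s(q, d^{+})\big) + \sum_{d^{-} \in N(q)} \exp\big(s(q, d^{-})\big)},
\qquad
\mathcal{L}_{\mathrm{GR}}(q, d^{+}) = -\sum_{t=1}^{T} \log p_{\theta}\big(y_{t} \mid y_{<t}, q\big).
$$

The DR loss is normalized only over the positive and the sampled negatives, so its gradient depends on which negatives happen to be drawn; the GR loss is a product of per-token softmaxes over the full identifier vocabulary, which implicitly normalizes over the entire space of document identifiers and is the sense in which the summary calls it globally normalized.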
📝 Abstract
Generative retrieval (GR) has emerged as a new paradigm in neural information retrieval, offering an alternative to dense retrieval (DR) by directly generating identifiers of relevant documents. In this paper, we theoretically and empirically investigate how GR fundamentally diverges from DR in both learning objectives and representational capacity. GR performs globally normalized maximum-likelihood optimization and encodes corpus and relevance information directly in the model parameters, whereas DR adopts locally normalized objectives and represents the corpus with external embeddings before computing similarity via a bilinear interaction. Our analysis suggests that, under scaling, GR can overcome the inherent limitations of DR, yielding two major benefits. First, with larger corpora, GR avoids the sharp performance degradation caused by the optimization drift induced by DR's local normalization. Second, with larger models, GR's representational capacity scales with parameter size, unconstrained by the global low-rank structure that limits DR. We validate these theoretical insights through controlled experiments on the Natural Questions and MS MARCO datasets, across varying negative sampling strategies, embedding dimensions, and model scales. However, despite its theoretical advantages, GR does not yet universally outperform DR in practice. We outline directions to bridge the gap between GR's theoretical potential and practical performance, providing guidance for future research in scalable and robust generative retrieval.
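The global low-rank structure mentioned above follows from the bilinear interaction itself; a minimal illustration, where $e_{q}, e_{d} \in \mathbb{R}^{k}$ denote the query and document embeddings, $k$ is the embedding dimension, and $W$ is an (optional) interaction matrix (notation assumed here for exposition, not taken from the paper):

$$
s(q, d) = e_{q}^{\top} W e_{d}, \qquad e_{q}, e_{d} \in \mathbb{R}^{k},
$$

so the full query-document score matrix factors as $E_{Q} W E_{D}^{\top}$ and has rank at most $k$, regardless of how large the encoders are. GR instead produces scores through the decoder's full parameterization, so its capacity is not bounded by a single global rank in this way, which is the scaling argument the abstract appeals to.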