🤖 AI Summary
To address low response accuracy, high inference latency, and positional bias in long-context retrieval-augmented generation (RAG) systems, this paper proposes a novel "parallel draft generation, unified verification" paradigm. The method employs a dual-model collaborative architecture: a lightweight small model processes multiple disjoint subsets of retrieved documents in parallel to generate diverse candidate drafts; a single large language model then performs one cross-draft aggregation and consistency-verification pass, reducing token consumption and mitigating positional bias. The approach integrates retrieval augmentation, cross-document perspective aggregation, model distillation, and multi-draft cooperative verification. Evaluated on five standard benchmarks, it achieves state-of-the-art performance: on the PubHealth dataset, accuracy improves by up to 12.97% while end-to-end latency decreases by 50.83%.
📝 Abstract
Retrieval-augmented generation (RAG) combines the generative abilities of large language models (LLMs) with external knowledge sources to provide more accurate and up-to-date responses. Recent RAG advancements focus on improving retrieval outcomes through iterative LLM refinement or self-critique capabilities acquired through additional instruction tuning of LLMs. In this work, we introduce Speculative RAG, a framework that leverages a larger generalist LM to efficiently verify multiple RAG drafts produced in parallel by a smaller, distilled specialist LM. Each draft is generated from a distinct subset of retrieved documents, offering diverse perspectives on the evidence while reducing the input token count per draft. This approach enhances comprehension of each subset and mitigates potential position bias over long context. Our method accelerates RAG by delegating drafting to the smaller specialist LM, with the larger generalist LM performing a single verification pass over the drafts. Extensive experiments demonstrate that Speculative RAG achieves state-of-the-art performance with reduced latency on the TriviaQA, MuSiQue, PopQA, PubHealth, and ARC-Challenge benchmarks. It notably enhances accuracy by up to 12.97% while reducing latency by 50.83% compared to conventional RAG systems on PubHealth.
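The draft-then-verify flow described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `draft_with_specialist` and `verify_with_generalist` are hypothetical stand-ins for calls to the small specialist LM and large generalist LM, and the round-robin partition and toy scoring heuristic are assumptions for the sake of a runnable example.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def partition_documents(documents, num_subsets):
    """Split retrieved documents into disjoint subsets (round-robin here;
    the paper samples subsets to cover diverse perspectives)."""
    subsets = [[] for _ in range(num_subsets)]
    for i, doc in enumerate(documents):
        subsets[i % num_subsets].append(doc)
    return subsets

def draft_with_specialist(question, doc_subset):
    """Hypothetical stand-in for the small specialist LM: produce one
    candidate answer draft plus its supporting rationale from a subset."""
    evidence = " ".join(doc_subset)
    return {"answer": f"draft answer grounded in: {evidence}", "rationale": evidence}

def verify_with_generalist(question, draft):
    """Hypothetical stand-in for the generalist LM's verification scoring:
    assign a confidence score to one draft (toy heuristic, not a real LM)."""
    return 1.0 / (1.0 + math.exp(-len(draft["rationale"]) / 100.0))

def speculative_rag(question, documents, num_drafts=3):
    subsets = partition_documents(documents, num_drafts)
    # Drafts are produced in parallel by the small specialist LM,
    # each seeing only its own (shorter) subset of evidence.
    with ThreadPoolExecutor(max_workers=num_drafts) as pool:
        drafts = list(pool.map(lambda s: draft_with_specialist(question, s), subsets))
    # The large generalist LM makes a single verification pass
    # over all drafts and the highest-scoring draft is returned.
    scores = [verify_with_generalist(question, d) for d in drafts]
    best = max(range(len(drafts)), key=lambda i: scores[i])
    return drafts[best]["answer"]
```

Because each draft sees only a fraction of the retrieved documents, no single context grows long enough to trigger position bias, and the expensive large model is invoked once rather than once per refinement iteration.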