🤖 AI Summary
Adaptive-RAG (A-RAG) suffers from severe inference inefficiency in multi-turn retrieval due to highly overlapping retrieval results, leading to redundant encoding and computation. Method: We propose a model-agnostic, efficient A-RAG framework comprising: (1) a retrieval-overlap-aware representation compression mechanism that caches and reuses embeddings to eliminate redundant prefill; (2) an instruction-driven dynamic attention guidance module that explicitly modulates LLM attention weights over retrieved content; and (3) parallel sequence generation coupled with conditional attention control to accelerate both prefill and decode phases. No fine-tuning is required, and the framework is compatible with diverse LLMs and retrievers. Contribution/Results: Experiments show that our method maintains generation quality while achieving average speedups of 2.79× in prefill and 2.33× in decode, significantly improving end-to-end inference efficiency.
📝 Abstract
Retrieval-augmented generation (RAG) has emerged as a pivotal method for expanding the knowledge of large language models. To handle complex queries more effectively, researchers developed Adaptive-RAG (A-RAG), which enhances generation quality through multiple interactions with external knowledge bases. Despite its effectiveness, A-RAG exacerbates the efficiency challenges already inherent in RAG because it relies on multiple iterations of generation. Existing A-RAG approaches process all retrieved content from scratch in every round, overlooking the fact that retrieval results often overlap significantly across rounds. This overlapping content is redundantly represented, producing a large proportion of repeated computation and degrading overall efficiency. To address this issue, this paper introduces a model-agnostic approach, generally applicable to A-RAG methods, that reduces the redundant representation caused by overlapping retrieval results. Specifically, we use cache access and parallel generation to speed up the prefilling and decoding stages, respectively. We also propose an instruction-driven module that guides the model to attend to each part of the content in a way better suited to LLMs. Experiments show that our approach achieves average speedups of 2.79× for prefilling and 2.33× for decoding while maintaining equal generation quality.
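The core caching idea can be illustrated with a minimal sketch. This is not the paper's implementation: the `ChunkCache` class, its method names, and the stand-in `_encode` (which substitutes a cheap hash for the expensive prefill of a retrieved passage) are all hypothetical, chosen only to show how reusing cached representations avoids re-encoding passages that overlap across A-RAG retrieval rounds.

```python
class ChunkCache:
    """Hypothetical cache: reuse representations of retrieved chunks
    across A-RAG rounds so overlapping chunks are not re-prefilled."""

    def __init__(self):
        self._store = {}       # chunk text -> cached representation
        self.encode_calls = 0  # counts actual (expensive) encodings

    def _encode(self, chunk):
        # Stand-in for the costly prefill/encoding of one retrieved chunk
        # (in practice this would produce KV states or embeddings).
        self.encode_calls += 1
        return hash(chunk)

    def get_representations(self, chunks):
        # Encode only chunks unseen in earlier rounds; reuse the rest.
        reps = []
        for chunk in chunks:
            if chunk not in self._store:
                self._store[chunk] = self._encode(chunk)
            reps.append(self._store[chunk])
        return reps


cache = ChunkCache()
round1 = ["passage A", "passage B", "passage C"]
round2 = ["passage B", "passage C", "passage D"]  # heavy overlap with round 1
cache.get_representations(round1)
cache.get_representations(round2)
print(cache.encode_calls)  # 4 distinct passages encoded, not 6
```

With six retrieved passages across two rounds but only four distinct ones, the cache performs four encodings instead of six; the larger the overlap between rounds, the more prefill work is skipped.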