Accelerating Adaptive Retrieval Augmented Generation via Instruction-Driven Representation Reduction of Retrieval Overlaps

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Adaptive-RAG (A-RAG) suffers from severe inference inefficiency in multi-turn retrieval due to highly overlapping retrieval results, leading to redundant encoding and computation. Method: We propose a model-agnostic, efficient A-RAG framework comprising: (1) a retrieval-overlap-aware representation compression mechanism that caches and reuses embeddings to eliminate redundant prefill; (2) an instruction-driven dynamic attention guidance module that explicitly modulates LLM attention weights over retrieved content; and (3) parallel sequence generation coupled with conditional attention control to accelerate both prefill and decode phases. No fine-tuning is required, and the framework is compatible with diverse LLMs and retrievers. Contribution/Results: Experiments show that our method maintains generation quality while achieving average speedups of 2.79× in prefill and 2.33× in decode, significantly improving end-to-end inference efficiency.
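The caching idea in (1) can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: `encode()` stands in for the expensive prefill over one retrieved chunk, and all names here are assumptions. The point is that chunks reappearing across retrieval rounds are represented once and reused.

```python
# Minimal sketch of representation reuse across A-RAG retrieval rounds.
# encode() is a toy stand-in for an expensive prefill/embedding pass;
# all names are illustrative, not from the paper.

def encode(chunk: str) -> list[float]:
    # Pretend this is costly: one prefill over the chunk's tokens.
    return [float(ord(c)) for c in chunk]

class ChunkCache:
    def __init__(self):
        self._store: dict[str, list[float]] = {}
        self.hits = 0
        self.misses = 0

    def represent(self, chunks: list[str]) -> list[list[float]]:
        reps = []
        for chunk in chunks:
            if chunk in self._store:
                self.hits += 1    # overlapping chunk: reuse, no recompute
            else:
                self.misses += 1  # new chunk: pay the encoding cost once
                self._store[chunk] = encode(chunk)
            reps.append(self._store[chunk])
        return reps

cache = ChunkCache()
cache.represent(["doc_a", "doc_b", "doc_c"])  # round 1: all chunks are new
cache.represent(["doc_b", "doc_c", "doc_d"])  # round 2: two chunks overlap
print(cache.hits, cache.misses)  # → 2 4
```

In the paper's setting the cached objects would be the LLM's internal representations (eliminating redundant prefill) rather than toy vectors, but the hit/miss economics are the same: the cost of each overlapping chunk is paid only in the round where it first appears.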

📝 Abstract
Retrieval-augmented generation (RAG) has emerged as a pivotal method for expanding the knowledge of large language models. To handle complex queries more effectively, researchers developed Adaptive-RAG (A-RAG), which enhances generation quality through multiple interactions with external knowledge bases. Despite its effectiveness, A-RAG exacerbates the efficiency challenges inherent in RAG because it relies on multiple iterations of generation. Existing A-RAG approaches process all retrieved contents from scratch, ignoring the significant overlap in retrieval results across rounds. This overlapping content is redundantly represented, causing a large proportion of repeated computation and degrading overall efficiency. To address this issue, this paper introduces a model-agnostic approach, generally applicable to A-RAG methods, that reduces the redundant representation caused by overlapping retrieval results. Specifically, we use cache access and parallel generation to speed up the prefilling and decoding stages, respectively. We also propose an instruction-driven module that guides the model to attend to each part of the content in a way better suited to LLMs. Experiments show that our approach achieves average speedups of 2.79× for prefilling and 2.33× for decoding while maintaining equal generation quality.
Problem

Research questions and friction points this paper is trying to address.

Reducing redundant representation in retrieval overlaps
Accelerating prefilling and decoding stages in A-RAG
Improving efficiency without compromising generation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cache access accelerates prefilling stage
Parallel generation speeds up decoding stage
Instruction-driven module optimizes content attention
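The "parallel generation" bullet can be illustrated with a deliberately simplified sketch. The paper parallelizes sequence generation inside the LLM's decode phase, not with Python threads; here `decode()` is a hypothetical stand-in for one decoding pass, and the only point shown is that independent continuations can be produced concurrently instead of one after another.

```python
# Toy illustration of decoding several independent sequences concurrently
# rather than sequentially; decode() and the prompts are assumptions.
from concurrent.futures import ThreadPoolExecutor

def decode(prompt: str) -> str:
    # Stand-in for one autoregressive decoding pass over a prompt.
    return prompt.upper()

prompts = ["answer with doc_b", "answer with doc_c", "answer with doc_d"]
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(decode, prompts))  # decoded concurrently
print(outputs[0])  # → ANSWER WITH DOC_B
```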
Jie Ou
School of Information and Software Engineering, University of Electronic Science and Technology of China

Jinyu Guo
University of Electronic Science and Technology of China
Natural Language Processing

Shuaihong Jiang
School of Information and Software Engineering, University of Electronic Science and Technology of China

Zhaokun Wang
School of Information and Software Engineering, University of Electronic Science and Technology of China

Libo Qin
Central South University

Shunyu Yao
Big Data and Artificial Intelligence Institute, China Telecom Research Institute

Wenhong Tian
University of Electronic Science and Technology of China
Approximation Algorithms for NP-Hard Problems · Resource Scheduling · Network Modeling and Performance Optimization