LLM-Independent Adaptive RAG: Let the Question Speak for Itself

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead and false-positive retrievals caused by unnecessary retrieval in Retrieval-Augmented Generation (RAG), this paper proposes a lightweight, LLM-agnostic adaptive retrieval method. Instead of relying on large language model (LLM)-based uncertainty estimation, the approach decides whether to trigger retrieval based solely on shallow semantic and structural features of the input question, such as token length, entity density, and interrogative-word patterns, combined with interpretable rules and a lightweight classifier. Through a systematic analysis of 27 external features organized into 7 groups, along with their hybrid combinations, the authors establish the first LLM-free adaptive retrieval paradigm. Evaluated on six open-domain QA benchmarks, the method matches the question-answering accuracy of state-of-the-art LLM-based adaptive approaches while reducing inference latency by 62% and retrieval calls by 58%, significantly improving both efficiency and practical deployability.
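To make the idea concrete, here is a minimal sketch of an LLM-free retrieval-trigger decision of the kind the summary describes. The specific features and thresholds below are illustrative assumptions, not the paper's actual 27 features or learned classifier; entity density is approximated by a crude capitalization heuristic.

```python
import re

def extract_features(question: str) -> dict:
    """Shallow, LLM-free question features (illustrative subset only,
    not the paper's exact feature set)."""
    tokens = question.split()
    # Crude entity proxy: capitalized tokens after the first word.
    entities = [t for t in tokens[1:] if t[:1].isupper()]
    wh_words = {"who", "what", "when", "where", "which", "why", "how"}
    first = tokens[0].lower().strip("?,.") if tokens else ""
    return {
        "token_length": len(tokens),
        "entity_density": len(entities) / max(len(tokens), 1),
        "starts_with_wh": first in wh_words,
        "has_digits": bool(re.search(r"\d", question)),
    }

def should_retrieve(question: str) -> bool:
    """Toy rule-based trigger: retrieve for entity-heavy, long, or
    number-laden factoid questions. Thresholds are made up for
    illustration; the paper trains a lightweight classifier instead."""
    f = extract_features(question)
    return (f["entity_density"] > 0.15
            or f["token_length"] > 12
            or f["has_digits"])
```

In practice, such features would feed a small trained classifier (e.g., logistic regression) rather than hand-set thresholds; the key point is that no LLM forward pass is needed to make the retrieve/skip decision.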

📝 Abstract
Large Language Models (LLMs) are prone to hallucinations, and Retrieval-Augmented Generation (RAG) helps mitigate this, but at a high computational cost while risking misinformation. Adaptive retrieval aims to retrieve only when necessary, but existing approaches rely on LLM-based uncertainty estimation, which remains inefficient and impractical. In this study, we introduce lightweight LLM-independent adaptive retrieval methods based on external information. We investigated 27 features, organized into 7 groups, and their hybrid combinations. We evaluated these methods on 6 QA datasets, assessing both QA performance and efficiency. The results show that our approach matches the performance of complex LLM-based methods while achieving significant efficiency gains, demonstrating the potential of external information for adaptive retrieval.
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM hallucinations without high computational cost
Eliminating reliance on LLM-based uncertainty estimation
Improving adaptive retrieval efficiency using external information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight LLM-independent adaptive retrieval methods
Utilizes 27 features organized into 7 groups
Matches LLM-based performance with higher efficiency