Scaling Test-Time Inference with Policy-Optimized, Dynamic Retrieval-Augmented Generation via KV Caching and Decoding

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address factual inconsistency and high memory overhead in long-context retrieval-augmented generation (RAG) for knowledge-intensive tasks, this paper proposes a dynamic RAG framework. Methodologically, it introduces three lightweight components compatible with standard Transformer architectures without additional training: (1) Policy-Optimized RAG (PORAG), which uses reinforcement learning to optimize how retrieved information is used during generation; (2) Adaptive Token-Layer Attention Scoring (ATLAS), which makes fine-grained, context-aware decisions about *when* and *what* to retrieve; and (3) CRITIC, a key-value cache compression mechanism that prunes redundant KV entries by importance scoring while preserving generation quality. Experiments on open-domain question answering show substantial gains: hallucination rates drop significantly, answer accuracy rises markedly, inference latency falls by 37%, and long-context KV memory usage shrinks by 52%. Overall, the framework outperforms conventional RAG approaches in quality, efficiency, and scalability.
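The KV compression idea can be illustrated with a minimal sketch: score each cached token by importance (the paper uses importance scoring; here the scores are simply given as input) and keep only the top-k entries. The function name, the `keep_ratio` parameter, and the list-of-pairs cache layout are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of importance-based KV cache pruning in the spirit of
# CRITIC. Scores stand in for whatever importance measure the method uses
# (e.g. accumulated attention mass); real caches are per-layer tensors.

def prune_kv_cache(kv_cache, importance, keep_ratio=0.5):
    """kv_cache: list of (key, value) pairs, one per cached token.
    importance: per-token scores (higher = more important).
    Returns the pruned cache and the kept indices in original order."""
    assert len(kv_cache) == len(importance)
    k = max(1, int(len(kv_cache) * keep_ratio))
    # Rank tokens by importance, take the top-k, then restore token order.
    ranked = sorted(range(len(kv_cache)), key=lambda i: importance[i], reverse=True)
    kept = sorted(ranked[:k])
    return [kv_cache[i] for i in kept], kept

cache = [("k0", "v0"), ("k1", "v1"), ("k2", "v2"), ("k3", "v3")]
scores = [0.9, 0.1, 0.7, 0.2]
pruned, kept = prune_kv_cache(cache, scores, keep_ratio=0.5)
# kept == [0, 2]: the two highest-scoring tokens survive, halving the cache
```

Keeping indices in original order matters: positional information in the surviving entries must stay consistent with the sequence the model has already processed.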

📝 Abstract
We present a comprehensive framework for enhancing Retrieval-Augmented Generation (RAG) systems through dynamic retrieval strategies and reinforcement fine-tuning. This approach significantly improves large language models on knowledge-intensive tasks, including open-domain question answering and complex reasoning. Our framework integrates two complementary techniques: Policy-Optimized Retrieval-Augmented Generation (PORAG), which optimizes the use of retrieved information, and Adaptive Token-Layer Attention Scoring (ATLAS), which dynamically determines retrieval timing and content based on contextual needs. Together, these techniques enhance both the utilization and relevance of retrieved content, improving factual accuracy and response quality. Designed as a lightweight solution compatible with any Transformer-based LLM without requiring additional training, our framework excels in knowledge-intensive tasks, boosting output accuracy in RAG settings. We further propose CRITIC, a novel method to selectively compress key-value caches by token importance, mitigating memory bottlenecks in long-context applications. The framework also incorporates test-time scaling techniques to dynamically balance reasoning depth and computational resources, alongside optimized decoding strategies for faster inference. Experiments on benchmark datasets show that our framework reduces hallucinations, strengthens domain-specific reasoning, and achieves significant efficiency and scalability gains over traditional RAG systems. This integrated approach advances the development of robust, efficient, and scalable RAG systems across diverse applications.
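The reinforcement fine-tuning idea behind a retrieval policy can be sketched with a toy REINFORCE loop: a two-action policy (skip vs. invoke retrieval) is updated with the log-probability gradient weighted by reward. The reward function, learning rate, and two-logit parametrization are invented for illustration and do not reproduce PORAG's actual objective.

```python
import math
import random

# Toy REINFORCE sketch of a retrieval-invocation policy, loosely in the
# spirit of policy-optimized RAG. Everything here (reward shape, lr,
# logit parametrization) is an illustrative assumption.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, action, reward, lr=0.1):
    """One policy-gradient update: grad log pi(a) = onehot(a) - probs."""
    probs = softmax(logits)
    return [l + lr * reward * ((1.0 if i == action else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]

random.seed(0)
logits = [0.0, 0.0]  # action 0 = skip retrieval, action 1 = retrieve
for _ in range(200):
    probs = softmax(logits)
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if action == 1 else -0.2  # toy task where retrieval helps
    logits = reinforce_step(logits, action, reward)
# on this toy task the policy drifts toward invoking retrieval
```

In a real system the reward would come from downstream answer quality (e.g. factuality of the generated response), not from a hand-coded rule as in this toy loop.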
Problem

Research questions and friction points this paper is trying to address.

Static RAG pipelines underutilize retrieved evidence and retrieve at fixed points regardless of contextual need
Factual errors and hallucinations on knowledge-intensive tasks such as open-domain QA
KV cache memory bottlenecks in long-context applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy-Optimized RAG enhances retrieval utilization
ATLAS dynamically adjusts retrieval timing
CRITIC compresses KV caches selectively
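The "when to retrieve" decision above can be illustrated with an uncertainty-based trigger: retrieve when the model's next-token distribution is high-entropy. Using predictive entropy with a fixed threshold is an illustrative stand-in here, not ATLAS's actual token-layer attention scoring rule.

```python
import math

# Hypothetical sketch of a dynamic retrieval trigger: invoke the retriever
# only when the model looks uncertain. The entropy criterion and threshold
# are assumptions for illustration.

def entropy(probs):
    """Shannon entropy in nats of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_retrieve(next_token_probs, threshold=1.0):
    """Trigger retrieval when predictive entropy exceeds the threshold."""
    return entropy(next_token_probs) > threshold

confident = [0.97, 0.01, 0.01, 0.01]  # peaked distribution: low entropy
uncertain = [0.25, 0.25, 0.25, 0.25]  # uniform distribution: max entropy

should_retrieve(confident)  # False: entropy ~0.17 nats, skip retrieval
should_retrieve(uncertain)  # True: entropy ln(4) ~ 1.39 nats > 1.0
```

The appeal of gating retrieval this way is cost: most decoding steps skip the retriever entirely, which is one route to the latency reductions the paper reports.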