🤖 AI Summary
This work addresses the high latency and computational cost of existing reflective RAG systems, which rely on large language models (LLMs) for binary routing decisions—a limitation that hinders their use in high-throughput or agent-based scenarios. To overcome this, the authors propose a parameter-efficient small language model (SLM) as a deterministic gating mechanism. The approach combines LoRA fine-tuning, constrained decoding, and a non-thinking inference mode to achieve low-latency, high-accuracy routing of retrieval results. Evaluated under a noise-injected benchmark framework, the method matches the routing accuracy of GPT-4o-mini while reducing latency by an order of magnitude, substantially improving the cost-efficiency of deployment in intelligent agent systems.
📝 Abstract
Retrieval-Augmented Generation (RAG) grounds Large Language Models (LLMs) in retrieved evidence to mitigate factual hallucinations. Recent paradigms shift from static pipelines to Modular and Agentic RAG frameworks, granting models autonomy for multi-hop reasoning and self-correction. However, current reflective RAG systems rely heavily on massive LLMs as universal evaluators. In high-throughput settings, executing complete forward passes through billion-parameter models merely for binary routing introduces severe computational redundancy. Furthermore, in autonomous agent scenarios, inaccurate retrieval causes models to expend excessive tokens on spurious reasoning and redundant tool calls, inflating Time-to-First-Token (TTFT) and cost. We propose Tiny-Critic RAG, which decouples evaluation by deploying a parameter-efficient Small Language Model (SLM) fine-tuned via Low-Rank Adaptation (LoRA). Acting as a deterministic gatekeeper, Tiny-Critic employs constrained decoding and a non-thinking inference mode for ultra-low-latency binary routing. Evaluations on noise-injected datasets demonstrate that Tiny-Critic RAG achieves routing accuracy comparable to GPT-4o-mini while reducing latency by an order of magnitude, establishing a highly cost-effective paradigm for agent deployment.
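The gating idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the paper's actual model, labels, and token ids are not given here): constrained decoding restricts the SLM's output distribution to two routing labels, so a single forward pass yields the decision and no chain-of-thought tokens are generated.

```python
import math

def constrained_route(logits, label_token_ids):
    """Binary routing via constrained decoding (illustrative sketch).

    logits: full-vocabulary logits from one SLM forward pass.
    label_token_ids: mapping from routing label to its token id,
        e.g. {"RELEVANT": 7, "NOISE": 13} -- ids are made up here.
    Returns the argmax label and its probability under a softmax
    renormalized over ONLY the allowed label tokens.
    """
    allowed = {label: logits[tid] for label, tid in label_token_ids.items()}
    m = max(allowed.values())                       # for numerical stability
    exps = {k: math.exp(v - m) for k, v in allowed.items()}
    z = sum(exps.values())
    probs = {k: v / z for k, v in exps.items()}
    label = max(probs, key=probs.get)
    return label, probs[label]

# Toy usage with fabricated logits (a real system would take these
# from the LoRA-adapted SLM's final-layer output):
logits = [0.0] * 20
logits[7], logits[13] = 2.0, 0.5
label, p = constrained_route(logits, {"RELEVANT": 7, "NOISE": 13})
```

Because only two token logits are compared, the critic emits exactly one decision token per call, which is what keeps the routing step off the TTFT critical path.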