Tiny-Critic RAG: Empowering Agentic Fallback with Parameter-Efficient Small Language Models

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high latency and computational cost of existing reflective RAG systems, which rely on large language models (LLMs) for binary routing decisions, a limitation that hinders their applicability in high-throughput or agent-based scenarios. To overcome this, the authors deploy a parameter-efficient, lightweight small language model (SLM) as a deterministic gating mechanism. The approach is the first to combine LoRA fine-tuning, constrained decoding, and a non-thinking inference mode to achieve low-latency, high-accuracy routing of retrieval results. Evaluated under a noise-injected benchmark framework, the method matches the routing accuracy of GPT-4o-mini while reducing latency by an order of magnitude, substantially improving the cost-efficiency of deployment in intelligent agent systems.
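The gating idea summarized above can be sketched in a few lines: instead of free-form generation, the critic's output is constrained to two verdict tokens, so routing costs one forward pass plus an argmax. The toy scoring function, token ids, and overlap heuristic below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch of a constrained-decoding binary gate.
# A stand-in "SLM" produces next-token logits; decoding is restricted
# to two verdict tokens ("accept"/"reject"), giving a deterministic route.

ACCEPT_ID, REJECT_ID = 0, 1   # ids of the two allowed verdict tokens (assumed)
VOCAB_SIZE = 32               # toy vocabulary size

rng = np.random.default_rng(0)

def critic_logits(query: str, passage: str) -> np.ndarray:
    """Stand-in for the SLM forward pass: returns next-token logits."""
    logits = rng.normal(size=VOCAB_SIZE)
    # Toy heuristic: lexical overlap nudges the "accept" logit upward.
    overlap = len(set(query.lower().split()) & set(passage.lower().split()))
    logits[ACCEPT_ID] += 5.0 * overlap
    return logits

def route(query: str, passage: str) -> str:
    """Constrained decoding: mask every token except the two verdicts."""
    logits = critic_logits(query, passage)
    mask = np.full(VOCAB_SIZE, -np.inf)
    mask[[ACCEPT_ID, REJECT_ID]] = 0.0   # only verdict tokens survive
    verdict = int(np.argmax(logits + mask))
    return "accept" if verdict == ACCEPT_ID else "reject"

print(route("capital of France", "Paris is the capital of France"))
print(route("capital of France", "quantum entanglement basics"))
```

Because only a single constrained token is emitted, latency is dominated by one prefill pass of a small model, which is the source of the order-of-magnitude speedup the summary reports.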

📝 Abstract
Retrieval-Augmented Generation (RAG) grounds Large Language Models (LLMs) to mitigate factual hallucinations. Recent paradigms shift from static pipelines to Modular and Agentic RAG frameworks, granting models autonomy for multi-hop reasoning or self-correction. However, current reflective RAG heavily relies on massive LLMs as universal evaluators. In high-throughput systems, executing complete forward passes for billion-parameter models merely for binary routing introduces severe computational redundancy. Furthermore, in autonomous agent scenarios, inaccurate retrieval causes models to expend excessive tokens on spurious reasoning and redundant tool calls, inflating Time-to-First-Token (TTFT) and costs. We propose Tiny-Critic RAG, decoupling evaluation by deploying a parameter-efficient Small Language Model (SLM) via Low-Rank Adaptation (LoRA). Acting as a deterministic gatekeeper, Tiny-Critic employs constrained decoding and non-thinking inference modes for ultra-low latency binary routing. Evaluations on noise-injected datasets demonstrate Tiny-Critic RAG achieves routing accuracy comparable to GPT-4o-mini while reducing latency by an order of magnitude, establishing a highly cost-effective paradigm for agent deployment.
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
Large Language Models
computational redundancy
latency
autonomous agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tiny-Critic RAG
Small Language Model (SLM)
Low-Rank Adaptation (LoRA)
Agentic RAG
Parameter-Efficient Routing
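Low-Rank Adaptation, listed above, adapts a frozen weight matrix W by adding a trainable low-rank product BA, so only r·(d+k) parameters are trained instead of d·k. A minimal NumPy sketch (shapes and rank are illustrative, not the paper's configuration):

```python
import numpy as np

# Minimal LoRA illustration: the base weight W stays frozen; only the
# low-rank factors A and B are trained, and the effective weight is W + B @ A.

d, k, r = 64, 64, 4                  # layer dims and LoRA rank (r << d, k)
rng = np.random.default_rng(42)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection (zero init)

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the LoRA update: equivalent to (W + B @ A) @ x."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=k)
# With B initialized to zero, the adapted layer starts identical to the base.
assert np.allclose(adapted_forward(x), W @ x)

full_params = d * k            # 4096 weights in a full fine-tune
lora_params = r * (d + k)      # 512 trainable weights under LoRA
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

The zero-initialized B is the standard LoRA trick: the critic starts as the unmodified base SLM and drifts only as the adapter trains.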
👥 Authors
Yichao Wu (SenseTime Group Limited): AGI, LLM, Computer Vision, Face Recognition
Penghao Liang (Northeastern University)
Yafei Xiang (Northeastern University)
Mengwei Yuan (Independent Researcher)
Jianan Liu (unknown affiliation): Signal Processing, Deep Learning, Sensing and Perception, Autonomous Driving, Medical Imaging
Jing Yang (Washington University in St. Louis)
Xianyou Li (New York University)
Weiran Yan (Independent Researcher)