Knowledge Access Beats Model Size: Memory Augmented Routing for Persistent AI Agents

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference cost that large language models incur on repetitive user queries, a redundancy in historical dialogues that existing approaches fail to exploit. The authors propose a memory-augmented inference framework in which a lightweight model, paired with hybrid retrieval (BM25 and cosine similarity), fetches relevant historical context. A confidence-based routing mechanism then directs queries to a low-cost response path, minimizing calls to the large model. Evaluated on the LoCoMo and LongMemEval benchmarks, the method achieves 30.5% F1 using only an 8B-parameter model with memory, recovering 69% of the performance of a full-context 235B model while reducing inference costs by 96%. Hybrid retrieval further improves F1 by 7.7 points. This study demonstrates for the first time that, in persistent AI agents, an efficient memory mechanism can substantially improve both inference efficiency and accuracy without additional training or annotated data, showing that improved knowledge access outperforms mere model scaling.
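The hybrid retrieval step (BM25 fused with cosine similarity over conversational history) can be sketched in pure Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the regex tokenizer, the min-max score fusion with weight `alpha`, and the bag-of-words stand-in for embedding-based cosine similarity are all simplifications chosen to keep the sketch self-contained.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each doc for the query (small in-memory corpus)."""
    toks = [tokenize(d) for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    df = Counter()                       # document frequency per term
    for t in toks:
        df.update(set(t))
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in set(tokenize(query)):
            if w not in tf:
                continue
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            len_norm = 1 - b + b * len(t) / avgdl
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * len_norm)
        scores.append(s)
    return scores

def cosine_scores(query, docs):
    """Bag-of-words cosine similarity, a crude stand-in for embedding cosine."""
    q = Counter(tokenize(query))
    nq = math.sqrt(sum(c * c for c in q.values()))
    out = []
    for d in docs:
        v = Counter(tokenize(d))
        nv = math.sqrt(sum(c * c for c in v.values()))
        dot = sum(q[w] * v[w] for w in q)
        out.append(dot / (nq * nv) if nq and nv else 0.0)
    return out

def hybrid_retrieve(query, docs, alpha=0.5, top_k=2):
    """Fuse min-max-normalised BM25 and cosine scores with weight alpha."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    bm = minmax(bm25_scores(query, docs))
    cs = minmax(cosine_scores(query, docs))
    fused = [alpha * x + (1 - alpha) * y for x, y in zip(bm, cs)]
    order = sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)
    return [docs[i] for i in order[:top_k]]
```

For example, given a short history of user turns, `hybrid_retrieve("when did I adopt my cat", history, top_k=1)` surfaces the turn mentioning the cat adoption, since the rare term "cat" dominates both the lexical and the overlap-based signal.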

📝 Abstract
Production AI agents frequently receive user-specific queries that are highly repetitive, with up to 47% being semantically similar to prior interactions, yet each query is typically processed with the same computational cost. We argue that this redundancy can be exploited through conversational memory, transforming repetition from a cost burden into an efficiency advantage. We propose a memory-augmented inference framework in which a lightweight 8B-parameter model leverages retrieved conversational context to answer all queries via a low-cost inference path. Without any additional training or labeled data, this approach achieves 30.5% F1, recovering 69% of the performance of a full-context 235B model while reducing effective cost by 96%. Notably, a 235B model without memory (13.7% F1) underperforms even the standalone 8B model (15.4% F1), indicating that for user-specific queries, access to relevant knowledge outweighs model scale. We further analyze the role of routing and confidence. At practical confidence thresholds, routing alone already directs 96% of queries to the small model, but yields poor accuracy (13.0% F1) due to confident hallucinations. Memory does not substantially alter routing decisions; instead, it improves correctness by grounding responses in retrieved user-specific information. As conversational memory accumulates over time, coverage of recurring topics increases, further narrowing the performance gap. We evaluate on 152 LoCoMo questions (Qwen3-8B/235B) and 500 LongMemEval questions. Incorporating hybrid retrieval (BM25 + cosine similarity) improves performance by an additional +7.7 F1, demonstrating that retrieval quality directly enhances end-to-end system performance. Overall, our results highlight that memory, rather than model size, is the primary driver of accuracy and efficiency in persistent AI agents.
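The confidence-based routing described in the abstract can be sketched as follows. Everything here is hypothetical scaffolding: `small_model`, `large_model`, the word-overlap confidence heuristic, and the relative costs are illustrative placeholders for real LLM calls and token-probability confidence scores. The sketch captures the key behavior the paper reports: with grounding memory the small model's confidence clears the threshold and the query stays on the cheap path; without memory the query escalates to the expensive model.

```python
from dataclasses import dataclass

SMALL_COST, LARGE_COST = 1.0, 30.0  # illustrative relative per-query costs

@dataclass
class Answer:
    text: str
    confidence: float  # in a real system: derived from token log-probs

def small_model(query, memory):
    """Stand-in for the 8B model. Confidence rises with how well the
    retrieved memory covers the query (a toy word-overlap heuristic)."""
    overlap = len(set(query.lower().split()) & set(memory.lower().split()))
    conf = min(1.0, 0.3 + 0.2 * overlap)
    return Answer(f"[8B answer grounded in: {memory!r}]", conf)

def large_model(query):
    """Stand-in for the 235B fallback model."""
    return Answer("[235B answer]", 1.0)

def route(query, memory, threshold=0.7):
    """Confidence-based routing: accept the small model's answer when its
    confidence clears the threshold; otherwise escalate to the large model."""
    ans = small_model(query, memory)
    if ans.confidence >= threshold:
        return ans, SMALL_COST
    return large_model(query), LARGE_COST
```

With relevant memory retrieved, `route("where does my cat miso sleep", "i adopted a cat named miso in june")` stays on the small-model path at unit cost; with empty memory the same query escalates to the large model.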
Problem

Research questions and friction points this paper is trying to address.

conversational memory
query redundancy
persistent AI agents
memory-augmented inference
user-specific queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory-augmented inference
conversational memory
model routing
retrieval-augmented generation
persistent AI agents
Authors

Xunzhuo Liu (vLLM Semantic Router Project)
Bowei He (City University of Hong Kong, MBZUAI) · Data Mining, Language Model, GenAI4Science, Agentic AI
Xue Liu (vLLM Semantic Router Project, MBZUAI, McGill University, Mila)
Andy Luo (unknown affiliation)
Haichen Zhang (AMD)
Huamin Chen (Red Hat)