Beyond Static Summarization: Proactive Memory Extraction for LLM Agents

πŸ“… 2026-01-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
Current memory retrieval approaches for large language model agents predominantly rely on static, single-pass summarization, which lacks awareness of task-specific requirements and factual verification, often leading to critical information loss and error propagation. This work proposes ProMem, a proactive memory extraction framework that moves beyond the conventional extract-once, ahead-of-time paradigm by modeling memory extraction as a task-driven, iterative cognitive process. Through a self-questioning mechanism, ProMem establishes a recurrent feedback loop that dynamically inspects the dialogue history, enabling recovery of missing information and correction of inaccuracies. Experimental results show that ProMem substantially improves memory completeness and question-answering accuracy while achieving a more favorable trade-off between extraction quality and token consumption.

πŸ“ Abstract
Memory management is vital for LLM agents to handle long-term interaction and personalization. Most research focuses on how to organize and use memory summaries, but often overlooks the initial memory extraction stage. In this paper, we argue, based on recurrent processing theory, that existing summary-based methods have two major limitations. First, summarization is "ahead-of-time": it acts as a blind "feed-forward" process that misses important details because it does not know the future tasks. Second, extraction is usually "one-off", lacking a feedback loop to verify facts, which leads to accumulated information loss. To address these issues, we propose proactive memory extraction (ProMem). Unlike static summarization, ProMem treats extraction as an iterative cognitive process. We introduce a recurrent feedback loop in which the agent uses self-questioning to actively probe the dialogue history, allowing it to recover missing information and correct errors. ProMem significantly improves the completeness of the extracted memory and QA accuracy, and achieves a superior trade-off between extraction quality and token cost.
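The extract-then-verify loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `llm` stands in for any chat-completion call, and all prompts, function names, and the `NONE` convergence convention are assumptions made here for clarity.

```python
# Hypothetical sketch of proactive memory extraction: draft a summary,
# self-question it against the dialogue history, and revise until the
# agent can raise no further questions (or a round budget is hit).
def proactive_extract(dialogue, llm, max_rounds=3):
    # Initial "ahead-of-time" pass, analogous to static summarization.
    memory = llm(f"Summarize the key facts to remember:\n{dialogue}")
    for _ in range(max_rounds):
        # Self-questioning: ask what the current memory fails to cover.
        questions = llm(
            "List questions this summary cannot answer about the "
            f"dialogue, or 'NONE':\nSummary: {memory}"
        )
        if questions.strip() == "NONE":
            break  # feedback loop has converged
        # Probe the raw history to recover missing or incorrect facts.
        answers = llm(f"Answer from the dialogue:\n{dialogue}\n{questions}")
        # Fold the recovered facts back into the memory.
        memory = llm(f"Revise the summary with:\n{memory}\n{answers}")
    return memory
```

The `max_rounds` cap reflects the quality/token-cost trade-off the paper reports: each extra round spends tokens probing the history in exchange for a more complete memory.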
Problem

Research questions and friction points this paper is trying to address.

memory extraction
LLM agents
static summarization
information loss
feedback loop
Innovation

Methods, ideas, or system contributions that make the work stand out.

proactive memory extraction
recurrent feedback loop
self-questioning
LLM agents
memory summarization
Chengyuan Yang
State Key Laboratory for Novel Software Technology, Nanjing University, China
Zequn Sun
Nanjing University
Knowledge Graph, Large Language Model
Wei Wei
State Key Laboratory for Novel Software Technology, Nanjing University, China
Wei Hu
Nanjing University
Knowledge Graph, Database, NLP, Digital Health