E-mem: Multi-agent based Episodic Context Reconstruction for LLM Agent Memory

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing large language model agents often compromise contextual integrity during memory preprocessing due to sequence compression, thereby hindering long-range, high-precision System 2 reasoning. To overcome this limitation, the authors propose E-mem, a novel framework that abandons conventional compression paradigms in favor of a biologically inspired, heterogeneous, hierarchical multi-agent architecture. In E-mem, a central orchestrator agent coordinates multiple assistant agents that preserve uncompressed memory and perform localized context-activated reasoning and evidence aggregation. This approach enables lossless context retention and on-demand inference, achieving an F1 score exceeding 54% on the LoCoMo benchmark—7.75% higher than the current state-of-the-art GAM method—while reducing token consumption by over 70%.

📝 Abstract
The evolution of Large Language Model (LLM) agents towards System 2 reasoning, characterized by deliberative, high-precision problem-solving, requires maintaining rigorous logical integrity over extended horizons. However, prevalent memory preprocessing paradigms suffer from destructive de-contextualization. By compressing complex sequential dependencies into pre-defined structures (e.g., embeddings or graphs), these methods sever the contextual integrity essential for deep reasoning. To address this, we propose E-mem, a framework shifting from Memory Preprocessing to Episodic Context Reconstruction. Inspired by biological engrams, E-mem employs a heterogeneous hierarchical architecture where multiple assistant agents maintain uncompressed memory contexts, while a central master agent orchestrates global planning. Unlike passive retrieval, our mechanism empowers assistants to locally reason within activated segments, extracting context-aware evidence before aggregation. Evaluations on the LoCoMo benchmark demonstrate that E-mem achieves over 54% F1, surpassing the state-of-the-art GAM by 7.75%, while reducing token cost by over 70%.
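The summary and abstract describe the master/assistant split only at a high level. A minimal toy sketch of that orchestration pattern might look like the following; here simple keyword overlap stands in for the assistants' LLM-based local reasoning, and all class and function names are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass


@dataclass
class AssistantAgent:
    """Holds one uncompressed memory segment and reasons over it locally."""
    segment: list[str]  # raw, uncompressed dialogue turns

    def activate(self, query_terms: set[str]) -> list[str]:
        # Stand-in for context-activated local reasoning: return the turns
        # that share vocabulary with the query, kept in their full form.
        return [turn for turn in self.segment
                if query_terms & set(turn.lower().split())]


class MasterAgent:
    """Orchestrates assistants and aggregates their extracted evidence."""

    def __init__(self, history: list[str], segment_size: int = 2):
        # Partition the raw history across assistants without compressing it.
        self.assistants = [
            AssistantAgent(history[i:i + segment_size])
            for i in range(0, len(history), segment_size)
        ]

    def answer(self, query: str) -> list[str]:
        terms = set(query.lower().split())
        evidence = []
        for assistant in self.assistants:            # global dispatch
            evidence.extend(assistant.activate(terms))  # local extraction
        return evidence                              # aggregation step


history = [
    "Alice: I adopted a cat named Mochi last spring.",
    "Bob: Nice! My dog is called Rex.",
    "Alice: Mochi loves sitting on my keyboard.",
    "Bob: Rex chews everything.",
]
master = MasterAgent(history)
print(master.answer("cat named Mochi"))  # both Alice turns, verbatim
```

The point of the sketch is the division of labor: no assistant ever summarizes or embeds its segment, so any evidence it surfaces reaches the aggregation step with its original context intact.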
Problem

Research questions and friction points this paper is trying to address.

LLM agents
memory preprocessing
contextual integrity
System 2 reasoning
de-contextualization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Episodic Context Reconstruction
Multi-agent Memory
LLM Agent Reasoning
Context Integrity
Heterogeneous Hierarchical Architecture
Kaixiang Wang
Shanghai Jiao Tong University, China
Yidan Lin
Shanghai Jiao Tong University, China
Jiong Lou
Research Assistant Professor, Shanghai Jiao Tong University
Edge computing, Blockchain
Zhaojiacheng Zhou
Shanghai Jiao Tong University, China
Bunyod Suvonov
Shanghai Jiao Tong University, China
Jie Li
IEEE Fellow, Chair Professor in CS, Shanghai Jiao Tong University
Big Data & AI, Blockchain, Network System and Security, OS