MAGMA: A Multi-Graph based Agentic Memory Architecture for AI Agents

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing memory-augmented generation methods rely on single-dimensional semantic-similarity retrieval, which struggles to disentangle multidimensional relationships such as temporal, causal, and entity-based dependencies, limiting reasoning accuracy and interpretability. To address this, the paper proposes a multi-graph agentic memory architecture that, for the first time, decomposes memory into four orthogonal relational graphs—semantic, temporal, causal, and entity—and introduces a strategy-guided cross-graph traversal mechanism for query-adaptive, structured context retrieval. The approach significantly outperforms existing systems on the LoCoMo and LongMemEval benchmarks, achieving state-of-the-art results on long-range reasoning tasks while improving the transparency and controllability of the reasoning process.
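The core decomposition can be pictured as one memory store with four separate edge sets, one per relation type. The sketch below is an illustration only; the class and method names are assumptions, not the paper's actual API:

```python
from collections import defaultdict

class MultiGraphMemory:
    """Toy multi-graph memory: each item is linked into four
    orthogonal relational views (semantic, temporal, causal,
    entity), so each edge type can be queried independently."""

    RELATIONS = ("semantic", "temporal", "causal", "entity")

    def __init__(self):
        self.items = {}                                   # id -> memory text
        self.graphs = {r: defaultdict(set) for r in self.RELATIONS}

    def add_item(self, item_id, text):
        self.items[item_id] = text

    def link(self, relation, src, dst):
        # one edge type per graph keeps the relational views orthogonal
        self.graphs[relation][src].add(dst)

    def neighbors(self, relation, item_id):
        return self.graphs[relation][item_id]

mem = MultiGraphMemory()
mem.add_item("m1", "Alice moved to Paris in 2021.")
mem.add_item("m2", "Alice started a new job in 2022.")
mem.link("temporal", "m1", "m2")   # m1 precedes m2
mem.link("entity", "m1", "m2")     # both mention Alice
print(mem.neighbors("temporal", "m1"))  # {'m2'}
```

Because the views share item ids but not edges, a retriever can consult only the relation a query calls for (e.g., temporal order) without semantic neighbors leaking in.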

📝 Abstract
Memory-Augmented Generation (MAG) extends Large Language Models with external memory to support long-context reasoning, but existing approaches largely rely on semantic similarity over monolithic memory stores, entangling temporal, causal, and entity information. This design limits interpretability and alignment between query intent and retrieved evidence, leading to suboptimal reasoning accuracy. In this paper, we propose MAGMA, a multi-graph agentic memory architecture that represents each memory item across orthogonal semantic, temporal, causal, and entity graphs. MAGMA formulates retrieval as policy-guided traversal over these relational views, enabling query-adaptive selection and structured context construction. By decoupling memory representation from retrieval logic, MAGMA provides transparent reasoning paths and fine-grained control over retrieval. Experiments on LoCoMo and LongMemEval demonstrate that MAGMA consistently outperforms state-of-the-art agentic memory systems in long-horizon reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Memory-Augmented Generation
long-context reasoning
memory entanglement
retrieval alignment
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-graph memory
agentic memory architecture
policy-guided retrieval
orthogonal memory representation
long-horizon reasoning