🤖 AI Summary
This work addresses the lack of coordination in existing memory-augmented large language model (LLM) agents during memory construction, retrieval, and utilization, which leads to myopic forward planning and sparse, delayed backward feedback. To overcome these limitations, we propose MemMA, a novel framework that, for the first time, jointly optimizes the forward and backward phases of the memory cycle. In the forward pass, a Meta-Thinker guides iterative retrieval and memory construction, while the backward pass employs an in-situ self-evolution mechanism that transforms task failures into targeted memory repair actions. By integrating multi-agent collaboration, structured policy guidance, and a plug-and-play memory architecture, MemMA significantly outperforms existing methods on the LoCoMo benchmark, remains compatible with diverse LLM backbones and storage backends, and delivers performance gains without retraining.
📝 Abstract
Memory-augmented LLM agents maintain external memory banks to support long-horizon interaction, yet most existing systems treat construction, retrieval, and utilization as isolated subroutines. This creates two coupled challenges: strategic blindness on the forward path of the memory cycle, where construction and retrieval are driven by local heuristics rather than explicit strategic reasoning, and sparse, delayed supervision on the backward path, where downstream failures rarely translate into direct repairs of the memory bank. To address these challenges, we propose MemMA, a plug-and-play multi-agent framework that coordinates the memory cycle along both the forward and backward paths. On the forward path, a Meta-Thinker produces structured guidance that steers a Memory Manager during construction and directs a Query Reasoner during iterative retrieval. On the backward path, MemMA introduces in-situ self-evolving memory construction, which synthesizes probe QA pairs, verifies the current memory, and converts failures into repair actions before the memory is finalized. Extensive experiments on LoCoMo show that MemMA consistently outperforms existing baselines across multiple LLM backbones and improves three different storage backends in a plug-and-play manner. Our code is publicly available at https://github.com/ventr1c/memma.
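The forward/backward memory cycle described in the abstract can be illustrated with a minimal sketch. All class and function names below (`MemoryBank`, `forward_pass`, `backward_pass`, the guidance and answering callables) are hypothetical illustrations under our own assumptions, not the released MemMA API: guidance stands in for the Meta-Thinker's construction policy, and the probe-QA loop stands in for in-situ self-evolving repair.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    """Toy plug-and-play memory store (stand-in for a real storage backend)."""
    entries: list = field(default_factory=list)

    def write(self, entry: str) -> None:
        self.entries.append(entry)

    def retrieve(self, query: str, k: int = 3) -> list:
        # Toy lexical overlap scoring in place of a real retriever.
        scored = sorted(self.entries,
                        key=lambda e: -sum(w in e for w in query.split()))
        return scored[:k]

def forward_pass(dialogue_turns, bank, guidance):
    """Forward path: guidance-driven memory construction."""
    for turn in dialogue_turns:
        if guidance(turn):        # Meta-Thinker decides what is worth storing
            bank.write(turn)      # Memory Manager constructs the entry

def backward_pass(bank, probe_qa, answer_fn):
    """Backward path: verify memory with probe QA; convert failures to repairs."""
    repairs = []
    for question, expected in probe_qa:
        context = bank.retrieve(question)
        if answer_fn(question, context) != expected:
            bank.write(expected)  # failure -> targeted memory repair action
            repairs.append(question)
    return repairs

# Example usage with toy policies:
bank = MemoryBank()
forward_pass(["Alice moved to Paris", "small talk"], bank,
             guidance=lambda turn: "Alice" in turn)
probe_qa = [("Where does Alice live?", "Alice moved to Paris"),
            ("Bob's job?", "Bob is a chef")]
repairs = backward_pass(bank, probe_qa,
                        answer_fn=lambda q, ctx: ctx[0] if ctx else None)
```

The key point the sketch captures is that verification and repair happen *before* the memory is finalized, so downstream failures feed directly back into the bank rather than being lost as sparse, delayed supervision.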