🤖 AI Summary
This work addresses the limitations of existing memory mechanisms in multi-agent systems, which often suffer from memory homogenization and information overload due to a lack of role awareness and overly fine-grained storage. To overcome these issues, the authors propose LatentMem, a novel framework that stores raw interaction trajectories in a lightweight experience bank and introduces a learnable latent memory mechanism. A context-conditioned memory composer generates compact, agent-specific memory representations, while a task-signal-driven Latent Memory Policy Optimization (LMPO) method enables efficient compression and role-aware memory management without modifying the underlying frameworks. Experimental results demonstrate that LatentMem achieves up to a 19.36% performance improvement over vanilla settings and consistently outperforms state-of-the-art memory architectures across multiple benchmarks and mainstream multi-agent frameworks.
📝 Abstract
Large language model (LLM)-powered multi-agent systems (MAS) demonstrate remarkable collective intelligence, wherein multi-agent memory serves as a pivotal mechanism for continual adaptation. However, existing multi-agent memory designs remain constrained by two fundamental bottlenecks: (i) memory homogenization arising from the absence of role-aware customization, and (ii) information overload induced by excessively fine-grained memory entries. To address these limitations, we propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts. Further, we introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to the composer, encouraging it to produce compact and high-utility representations. Extensive experiments across diverse benchmarks and mainstream MAS frameworks show that LatentMem achieves a performance gain of up to 19.36% over vanilla settings and consistently outperforms existing memory architectures, without requiring any modifications to the underlying frameworks.
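To make the described pipeline concrete, here is a minimal toy sketch of the two components the abstract names: an experience bank that stores raw trajectories and retrieves the most relevant ones, and a memory composer that maps retrieved experience plus an agent role to a fixed-size latent representation. All class names, the word-overlap retrieval, and the hash-based embedding are illustrative assumptions for exposition; the paper's actual components are learned and integrated with an LLM, and LMPO training is not shown here.

```python
import hashlib
from collections import deque

class ExperienceBank:
    """Hypothetical lightweight store of raw interaction trajectories."""
    def __init__(self, capacity=100):
        self.trajectories = deque(maxlen=capacity)

    def add(self, trajectory):
        self.trajectories.append(trajectory)

    def retrieve(self, query, k=2):
        # Toy retrieval: rank stored trajectories by word overlap with the query.
        scored = sorted(
            self.trajectories,
            key=lambda t: len(set(t.split()) & set(query.split())),
            reverse=True,
        )
        return scored[:k]

class MemoryComposer:
    """Hypothetical composer: condenses retrieved experience and an
    agent role into a fixed-size latent vector (toy hashed bag-of-words,
    standing in for the paper's learned latent memories)."""
    def __init__(self, dim=8):
        self.dim = dim

    def compose(self, experiences, role):
        vec = [0.0] * self.dim
        for token in (role + " " + " ".join(experiences)).split():
            h = int(hashlib.md5(token.encode()).hexdigest(), 16)
            vec[h % self.dim] += 1.0
        norm = sum(v * v for v in vec) ** 0.5 or 1.0
        return [v / norm for v in vec]

# The latent memory is compact and role-conditioned: its size is fixed
# regardless of how long the retrieved trajectories are.
bank = ExperienceBank()
bank.add("planner decomposed task into subtasks")
bank.add("coder fixed bug in parser")
hits = bank.retrieve("fix parser bug", k=1)
latent = MemoryComposer(dim=8).compose(hits, role="coder")
```

The point of the sketch is the interface, not the mechanics: conditioning on the agent's role is what avoids memory homogenization, and emitting a fixed-size latent rather than fine-grained text entries is what bounds token cost.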