SEDM: Scalable Self-Evolving Distributed Memory for Agents

📅 2025-09-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address memory bloat, noise accumulation, and poor cross-domain generalization in long-running multi-agent systems, this paper proposes a scalable, self-evolving distributed memory framework. Methodologically, it introduces (1) a verifiable write mechanism that ensures memory reliability; (2) utility-driven dynamic memory scheduling for on-demand storage and compression; and (3) cross-domain knowledge distillation with abstraction-based transfer to support continual learning and multi-hop reasoning in open environments. By leveraging a distributed topology and verifiable replay, the framework turns the memory system from a static repository into an actively optimized component, and in benchmark evaluations it achieves higher reasoning accuracy, lower token overhead, and effective suppression of noise propagation. The results demonstrate its efficiency, sustainability, and strong cross-domain generalization.
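To make the first mechanism concrete, here is a minimal sketch of verifiable write admission via reproducible replay. It assumes each candidate entry carries the inputs it was derived from, the claimed result, and a deterministic replay function; the names (`MemoryEntry`, `VerifiedMemory`, `try_write`) are hypothetical and not taken from the paper or its code.

```python
# Sketch only: admit a memory write only if deterministic replay reproduces the claim.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class MemoryEntry:
    key: str
    inputs: Dict[str, Any]                       # what the producing agent observed
    claimed_output: Any                          # what it claims to have derived
    replay_fn: Callable[[Dict[str, Any]], Any]   # deterministic re-execution of the step


@dataclass
class VerifiedMemory:
    entries: List[MemoryEntry] = field(default_factory=list)

    def try_write(self, entry: MemoryEntry) -> bool:
        """Admit `entry` only if reproducible replay matches the claimed output."""
        if entry.replay_fn(entry.inputs) != entry.claimed_output:
            return False                          # reject: unverifiable, would add noise
        self.entries.append(entry)
        return True


if __name__ == "__main__":
    mem = VerifiedMemory()
    good = MemoryEntry("sum", {"a": 2, "b": 3}, 5, lambda x: x["a"] + x["b"])
    bad = MemoryEntry("sum", {"a": 2, "b": 3}, 6, lambda x: x["a"] + x["b"])
    print(mem.try_write(good))  # True  -> admitted
    print(mem.try_write(bad))   # False -> filtered before it can propagate
```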

📝 Abstract
Long-term multi-agent systems inevitably generate vast amounts of trajectories and historical interactions, which makes efficient memory management essential for both performance and scalability. Existing methods typically depend on vector retrieval and hierarchical storage, yet they are prone to noise accumulation, uncontrolled memory expansion, and limited generalization across domains. To address these challenges, we present SEDM, Self-Evolving Distributed Memory, a verifiable and adaptive framework that transforms memory from a passive repository into an active, self-optimizing component. SEDM integrates verifiable write admission based on reproducible replay, a self-scheduling memory controller that dynamically ranks and consolidates entries according to empirical utility, and cross-domain knowledge diffusion that abstracts reusable insights to support transfer across heterogeneous tasks. Evaluations on benchmark datasets demonstrate that SEDM improves reasoning accuracy while reducing token overhead compared with strong memory baselines, and further enables knowledge distilled from fact verification to enhance multi-hop reasoning. The results highlight SEDM as a scalable and sustainable memory mechanism for open-ended multi-agent collaboration. The code will be released at a later stage of this project.
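The self-scheduling controller described in the abstract can be pictured roughly as below. This is a sketch under assumptions, not SEDM's released implementation: utility is estimated here from retrieval counts and downstream task success, and consolidation is reduced to capacity-bounded eviction of low-utility entries; the paper's actual scoring and consolidation rules may differ.

```python
# Sketch only: a utility-driven memory controller with capacity-bounded consolidation.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ScoredEntry:
    text: str
    retrievals: int = 0
    successes: int = 0

    @property
    def utility(self) -> float:
        # Laplace-smoothed success rate of tasks that used this entry.
        return (self.successes + 1) / (self.retrievals + 2)


@dataclass
class MemoryController:
    capacity: int
    store: Dict[str, ScoredEntry] = field(default_factory=dict)

    def write(self, key: str, text: str) -> None:
        self.store[key] = ScoredEntry(text)
        self._consolidate()

    def record_use(self, key: str, success: bool) -> None:
        entry = self.store[key]
        entry.retrievals += 1
        entry.successes += int(success)

    def _consolidate(self) -> None:
        """Evict the lowest-utility entries whenever capacity is exceeded."""
        if len(self.store) <= self.capacity:
            return
        ranked = sorted(self.store.items(), key=lambda kv: kv[1].utility, reverse=True)
        self.store = dict(ranked[: self.capacity])


if __name__ == "__main__":
    ctl = MemoryController(capacity=2)
    ctl.write("a", "insight that keeps helping")
    ctl.write("b", "insight that keeps misleading")
    ctl.record_use("a", success=True)
    ctl.record_use("b", success=False)
    ctl.write("c", "new insight")   # exceeds capacity -> lowest-utility entry is dropped
    print(sorted(ctl.store))        # ['a', 'c'] under this toy scoring
```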
Problem

Research questions and friction points this paper is trying to address.

Addresses noise accumulation and uncontrolled expansion in agent memory
Enhances cross-domain generalization for multi-agent systems
Transforms passive memory into an active, self-optimizing component
Innovation

Methods, ideas, or system contributions that make the work stand out.

Verifiable write admission with reproducible replay
Self-scheduling memory controller for dynamic ranking
Cross-domain knowledge diffusion for transfer learning (see the sketch below)
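A rough sketch of how cross-domain diffusion could look: an abstraction function (stubbed here, but in practice something like an LLM summarizer) strips domain-specific detail from verified entries and publishes the residual heuristics to a shared store that agents in other domains can retrieve from. All helper names and example strings are hypothetical and not from the paper.

```python
# Sketch only: distill source-domain entries into domain-neutral insights and share them.
from typing import Callable, Dict, List


def diffuse(
    source_entries: List[str],
    abstract_fn: Callable[[str], str],
    shared_store: Dict[str, List[str]],
    tag: str = "general",
) -> None:
    """Abstract each verified source entry and publish it to a shared, domain-neutral store."""
    for entry in source_entries:
        shared_store.setdefault(tag, []).append(abstract_fn(entry))


if __name__ == "__main__":
    # In practice abstract_fn would be an LLM summarizer; a string stub stands in here.
    def strip_specifics(entry: str) -> str:
        return entry.split(":", 1)[-1].strip()

    shared: Dict[str, List[str]] = {}
    fact_verification_memory = [
        "claim 17: cross-check dates against two independent sources",
        "claim 42: prefer primary documents over aggregator pages",
    ]
    diffuse(fact_verification_memory, strip_specifics, shared)
    # A multi-hop reasoning agent in a different domain can now retrieve these heuristics.
    print(shared["general"])
```
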
Authors

Haoran Xu · Gradient, Zhejiang University
Jiacong Hu · Gradient, South China University of Technology
Ke Zhang · Gradient, Waseda University
Lei Yu · Gradient, University of Toronto
Yuxin Tang · Gradient, Rice University
Xinyuan Song · Gradient, Emory University
Yiqun Duan · Meta | UTS | UBC · Vision & Language, Multi-Modality, Robotics, Brain-Computer Interface
Lynn Ai · Gradient
Bill Shi · Applied Scientist · Graph AI, Complex Networks, Computational Social Science