AI Summary
This paper addresses the challenge of collaborative associative memory construction by multi-agent systems operating under time-varying data streams. We propose the Distributed Dynamic Associative Memory (DDAM) framework, wherein each agent maintains a local associative memory and selectively incorporates information from others via an interest-weighted matrix. Efficient online learning and communication coordination are achieved through a tree-structured topology and composite routing. To our knowledge, DDAM is the first to integrate distributed associative memory with online convex optimization; we design the DDAM-TOGD algorithm and provide theoretical guarantees on both static and dynamic regret. Experiments demonstrate that DDAM significantly outperforms consensus-based distributed online learning baselines in memory accuracy and robustness to environmental perturbations, validating its effectiveness and practicality in dynamic, heterogeneous, and resource-constrained multi-agent settings.
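The static and dynamic regret mentioned above are the two standard performance measures in online convex optimization; the paper's per-agent variants may differ in detail, but the usual definitions are:

```latex
\[
\mathrm{Reg}^{\mathrm{s}}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\qquad
\mathrm{Reg}^{\mathrm{d}}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^\star),
\]
where $x_t^\star \in \arg\min_{x \in \mathcal{X}} f_t(x)$. Dynamic-regret bounds typically
scale with the path length $P_T = \sum_{t=2}^{T} \lVert x_t^\star - x_{t-1}^\star \rVert$,
which measures how much the sequence of per-round minimizers drifts.
```

A sublinear static regret means the average loss approaches that of the best fixed memory in hindsight; a path-length-dependent dynamic regret quantifies how tracking performance degrades as the environment changes faster.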
Abstract
An associative memory (AM) enables cue-response recall, and it has recently been recognized as a key mechanism underlying modern neural architectures such as Transformers. In this work, we introduce the concept of distributed dynamic associative memory (DDAM), which extends classical AM to settings with multiple agents and time-varying data streams. In DDAM, each agent maintains a local AM that must not only store its own associations but also selectively memorize information from other agents based on a specified interest matrix. To address this problem, we propose a novel tree-based distributed online gradient descent algorithm, termed DDAM-TOGD, which enables each agent to update its memory on the fly via inter-agent communication over designated routing trees. We derive rigorous performance guarantees for DDAM-TOGD, proving sublinear static regret in stationary environments and a path-length-dependent dynamic regret bound in non-stationary environments. These theoretical results provide insights into how communication delays and network structure impact performance. Building on the regret analysis, we further introduce a combinatorial tree design strategy that optimizes the routing trees to minimize communication delays, thereby improving regret bounds. Numerical experiments demonstrate that the proposed DDAM-TOGD framework achieves superior accuracy and robustness compared to representative online learning baselines such as consensus-based distributed optimization, confirming the benefits of the proposed approach in dynamic, distributed environments.
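DDAM-TOGD itself is specified in the paper body; as rough intuition for how an interest-weighted, tree-structured online memory update might look, here is a minimal Python sketch. The function name `ddam_togd_sketch`, the linear-map memory, the squared-loss recall objective, and the neighbor-averaging rule are all illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def ddam_togd_sketch(streams, tree_edges, interest, lr=0.1, dim=2):
    """Illustrative sketch (not the paper's algorithm): each agent stores a
    linear associative memory W mapping cues to responses, takes a local
    online-gradient-descent step on the squared recall error, then mixes
    its memory with tree-neighbor memories, weighted by an interest matrix."""
    n = interest.shape[0]
    T = len(streams[0])
    W = [np.zeros((dim, dim)) for _ in range(n)]  # per-agent memories

    # adjacency lists for the (undirected) routing tree
    nbrs = [[] for _ in range(n)]
    for u, v in tree_edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    losses = []
    for t in range(T):
        step_loss = 0.0
        new_W = []
        for i in range(n):
            cue, resp = streams[i][t]          # (cue, response) pair at time t
            err = W[i] @ cue - resp            # recall error on the new pair
            step_loss += 0.5 * float(err @ err)
            grad = np.outer(err, cue)          # gradient of the squared loss
            Wi = W[i] - lr * grad              # local OGD step

            # interest-weighted averaging with tree neighbors
            acc = interest[i, i] * Wi
            total = interest[i, i]
            for j in nbrs[i]:
                acc += interest[i, j] * W[j]
                total += interest[i, j]
            new_W.append(acc / total)
        W = new_W
        losses.append(step_loss)
    return W, losses
```

The sketch updates all agents synchronously and mixes only with direct tree neighbors; the paper's routing trees, communication delays, and tree-design step are abstracted away here.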