Distributed Dynamic Associative Memory via Online Convex Optimization

πŸ“… 2025-11-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This paper addresses the challenge of collaborative associative memory construction by multi-agent systems operating under time-varying data streams. We propose the Distributed Dynamic Associative Memory (DDAM) framework, wherein each agent maintains a local associative memory and selectively incorporates information from others via an interest-weighted matrix. Efficient online learning and communication coordination are achieved through a tree-structured topology and composite routing. To our knowledge, DDAM is the first to integrate distributed associative memory with online convex optimization; we design the DDAM-TOGD algorithm and provide theoretical guarantees on both static and dynamic regret. Experiments demonstrate that DDAM significantly outperforms consensus-based distributed online learning baselines in memory accuracy and robustness to environmental perturbations, validating its effectiveness and practicality in dynamic, heterogeneous, and resource-constrained multi-agent settings.

πŸ“ Abstract
An associative memory (AM) enables cue-response recall, and it has recently been recognized as a key mechanism underlying modern neural architectures such as Transformers. In this work, we introduce the concept of distributed dynamic associative memory (DDAM), which extends classical AM to settings with multiple agents and time-varying data streams. In DDAM, each agent maintains a local AM that must not only store its own associations but also selectively memorize information from other agents based on a specified interest matrix. To address this problem, we propose a novel tree-based distributed online gradient descent algorithm, termed DDAM-TOGD, which enables each agent to update its memory on the fly via inter-agent communication over designated routing trees. We derive rigorous performance guarantees for DDAM-TOGD, proving sublinear static regret in stationary environments and a path-length dependent dynamic regret bound in non-stationary environments. These theoretical results provide insights into how communication delays and network structure impact performance. Building on the regret analysis, we further introduce a combinatorial tree design strategy that optimizes the routing trees to minimize communication delays, thereby improving regret bounds. Numerical experiments demonstrate that the proposed DDAM-TOGD framework achieves superior accuracy and robustness compared to representative online learning baselines such as consensus-based distributed optimization, confirming the benefits of the proposed approach in dynamic, distributed environments.
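The abstract's core mechanism, each agent running online gradient descent on its own data stream while selectively weighting in other agents' information through an interest matrix, can be sketched as follows. This is an illustrative simplification, not the paper's DDAM-TOGD algorithm: the synchronous update, the quadratic cue-response loss, and the specific interest weights are all assumptions, and the tree-based communication with delays is omitted.

```python
import numpy as np

def distributed_ogd_step(W, grads, interest, lr=0.1):
    """One synchronous round of interest-weighted distributed OGD.

    W:        (n_agents, d) stacked local memory parameters.
    grads:    (n_agents, d) local loss gradients at the current round.
    interest: (n_agents, n_agents) row-stochastic interest weights;
              interest[i, j] is how much agent i weights agent j's stream.
    """
    # Each agent descends along an interest-weighted mix of gradients.
    return W - lr * interest @ grads

# Toy run: two agents learn cue->response memories from their streams.
rng = np.random.default_rng(0)
n_agents, d, lr = 2, 4, 0.2
W = np.zeros((n_agents, d))
interest = np.array([[0.8, 0.2],
                     [0.3, 0.7]])     # hypothetical interest weights
target = rng.normal(size=(n_agents, d))  # per-agent ideal memory
for t in range(200):
    grads = W - target                # gradient of 0.5 * ||W - target||^2
    W = distributed_ogd_step(W, grads, interest, lr)
```

With a row-stochastic interest matrix and a small step size, each local memory converges toward its own target while still incorporating gradient information from the other agent.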
Problem

Research questions and friction points this paper is trying to address.

Extends associative memory to multi-agent systems with dynamic data streams
Enables agents to selectively memorize information from other agents
Analyzes how communication delays and network structure impact performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tree-based distributed online gradient descent algorithm
Combinatorial tree design strategy for communication optimization
Dynamic associative memory for multi-agent time-varying data streams
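The combinatorial tree design mentioned above optimizes routing trees to reduce communication delay. As a minimal illustration (not the paper's combinatorial strategy), a breadth-first tree rooted at a source agent yields minimum-hop, and hence minimum-delay, routes in an unweighted network; the 5-agent topology below is hypothetical.

```python
from collections import deque

def bfs_routing_tree(adj, root):
    """Build a breadth-first routing tree rooted at `root`.

    In an unweighted graph, BFS parent pointers give minimum-hop
    paths from every agent back to the root.
    """
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent  # child -> parent map defines the tree edges

def hops(tree, v):
    """Hop count (proxy for delay) from agent v to the tree root."""
    h = 0
    while tree[v] is not None:
        v = tree[v]
        h += 1
    return h

# Hypothetical 5-agent network as adjacency lists.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
tree = bfs_routing_tree(adj, root=0)
```

Building one such tree per source agent and picking roots that minimize the worst-case hop count is one simple way to trade off the delay terms that appear in the paper's regret bounds.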
πŸ”Ž Similar Papers
No similar papers found.
B
Bowen Wang
King’s Communications, Learning and Information Processing (KCLIP) Lab, Centre for Intelligent Information Processing Systems, Department of Engineering, King’s College London, London WC2R 2LS, U.K.
Matteo Zecchin
King's College London
Wireless Communication, Machine Learning, Distributed Optimization, Bayesian Learning
Osvaldo Simeone
King's College London
Information theory, machine learning, quantum information processing, wireless systems