AI Summary
This work addresses the problem of collaborative associative memory optimization among multiple agents in a distributed setting, where each agent maintains a local associative memory (AM) model and selectively acquires information from others to support joint recall. To this end, we propose a distributed online convex optimization framework based on routing-tree communication, equipped with a distributed gradient update mechanism that continuously adapts local AM models in dynamic environments. Our key contribution is, to our knowledge, the first sublinear regret guarantee in the distributed associative memory setting, accompanied by a rigorous convergence analysis. Experimental results demonstrate that the proposed method significantly outperforms existing online optimization baselines in both recall accuracy and convergence speed, striking a favorable balance between theoretical soundness and practical effectiveness.
Abstract
An associative memory (AM) enables cue-response recall, and associative memorization has recently been noted to underlie the operation of modern neural architectures such as Transformers. This work addresses a distributed setting in which each agent maintains a local AM to recall its own associations as well as selected information from other agents. Specifically, we introduce a distributed online gradient descent method that optimizes the local AMs of different agents through communication over routing trees. Our theoretical analysis establishes sublinear regret guarantees, and experiments demonstrate that the proposed protocol consistently outperforms existing online optimization baselines.
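To make the setup concrete, the following is a minimal sketch of distributed online gradient descent for linear associative memories, under several assumptions not stated in the abstract: each agent's AM is a linear map `W` trained on squared recall loss, cues are normalized, the step size decays as 1/sqrt(t) (the standard choice for sublinear regret), and routing-tree communication is approximated by simple averaging with tree neighbors. The paper's actual AM parameterization and communication protocol may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_agents, T = 4, 3, 200

# A path graph 0 - 1 - 2 serves as the routing tree (illustrative topology).
neighbors = {0: [1], 1: [0, 2], 2: [1]}

# Each agent's local AM: a linear map W taking a cue x to a response W @ x
# (an assumed parameterization; the paper's AM model may differ).
W = [np.zeros((d, d)) for _ in range(n_agents)]
W_star = rng.normal(size=(d, d))  # synthetic target associations shared by all agents

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)  # decaying step size, the usual choice for O(sqrt(T)) regret
    grads = []
    for i in range(n_agents):
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)          # normalized cue observed by agent i
        y = W_star @ x                  # desired response
        err = W[i] @ x - y              # recall error
        grads.append(np.outer(err, x))  # gradient of 0.5 * ||W x - y||^2 w.r.t. W
    # Local gradient step, then averaging with tree neighbors
    # (a simple consensus stand-in for routing-tree communication).
    W = [W[i] - eta * grads[i] for i in range(n_agents)]
    W = [(W[i] + sum(W[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
         for i in range(n_agents)]

# Average distance of the local AMs from the target associations after T rounds.
final_loss = float(np.mean([np.linalg.norm(W[i] - W_star) for i in range(n_agents)]))
```

Each gradient step contracts the error along the sampled cue direction, and the neighbor averaging keeps every agent's model inside the convex hull of the current models, so `final_loss` is well below the initial distance `np.linalg.norm(W_star)` after 200 rounds.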