🤖 AI Summary
Existing associative memory models rely on fixed similarity metrics (e.g., Euclidean distance), which cannot guarantee that the retrieved item is the one most strongly associated with the query, leading to incorrect retrievals. To address this, we formulate the query as a generative variant of a stored pattern and model its latent distribution via variational inference. Within a maximum a posteriori (MAP) estimation framework, we jointly learn an adaptive similarity function that enables the network to approximate the likelihood of the underlying generative process. This mechanism is integrated into a novel adaptive Hopfield network (A-Hop), overcoming fundamental limitations of conventional metric-based retrieval. A-Hop achieves state-of-the-art performance under challenging conditions, including noise, occlusion, and bias, demonstrating robustness and generalization. Extensive experiments validate its superiority across memory retrieval, tabular and image classification, and multiple instance learning, consistently outperforming prior methods in both accuracy and generalization.
📝 Abstract
Associative memory models are content-addressable memory systems that are fundamental to biological intelligence and notable for their high interpretability. However, existing models evaluate retrieval quality by proximity, which cannot guarantee that the retrieved pattern has the strongest association with the query and thus fails to ensure correctness. We reframe this problem by proposing that a query is a generative variant of a stored memory pattern, and define a variant distribution to model this subtle, context-dependent generative process. Consequently, correct retrieval should return the memory pattern with the maximum a posteriori probability of being the query's origin. This perspective reveals that an ideal similarity measure should approximate the likelihood of each stored pattern generating the query under the variant distribution, which is impossible for the fixed, pre-defined similarities used by existing associative memories. To this end, we develop adaptive similarity, a novel mechanism that learns to approximate this unknown likelihood from samples drawn from the context, enabling correct retrieval. We theoretically prove that our proposed adaptive similarity achieves optimal retrieval under three canonical and widely applicable types of variants: noisy, masked, and biased. We integrate this mechanism into a novel adaptive Hopfield network (A-Hop), and empirical results show that it achieves state-of-the-art performance across diverse tasks, including memory retrieval, tabular classification, image classification, and multiple instance learning.
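To make the MAP-retrieval view concrete, here is a minimal NumPy sketch. It assumes the simplest of the three variant types, a noisy (Gaussian) variant, under which the per-pattern log-likelihood reduces to a scaled negative squared distance; the paper's actual adaptive similarity is *learned* from context samples rather than fixed like this. All function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def map_retrieve(patterns, query, log_likelihood, beta=1.0):
    """Return the stored pattern with maximum posterior probability of
    being the query's origin, plus the softmax retrieval weights
    (softmax over similarity scores, as in modern Hopfield retrieval)."""
    scores = np.array([log_likelihood(p, query) for p in patterns])
    weights = np.exp(beta * (scores - scores.max()))  # stable softmax
    weights /= weights.sum()
    return patterns[np.argmax(weights)], weights

# Gaussian noisy variant: log p(query | pattern) ∝ -||pattern - query||^2 / (2σ²).
sigma = 0.1
gauss_ll = lambda p, q: -np.sum((p - q) ** 2) / (2 * sigma ** 2)

rng = np.random.default_rng(0)
patterns = rng.standard_normal((5, 16))              # 5 stored patterns
query = patterns[2] + 0.05 * rng.standard_normal(16)  # noisy variant of pattern 2
retrieved, w = map_retrieve(patterns, query, gauss_ll)
```

In this sketch the correct origin (pattern 2) dominates the posterior because its squared distance to the query is far smaller than that of the unrelated patterns; an adaptive similarity would instead learn `log_likelihood` from context so the same MAP rule stays correct for masked and biased variants as well.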