Adaptive Hopfield Network: Rethinking Similarities in Associative Memory

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing associative memory models rely on fixed similarity metrics (e.g., Euclidean distance), which cannot guarantee that the retrieved item is the one most strongly associated with the query, resulting in suboptimal correctness. To address this, we formulate the query as a generative variant of stored patterns and model its latent distribution via variational inference. Within a maximum a posteriori (MAP) estimation framework, we jointly learn an adaptive similarity function that enables the network to approximate the likelihood of the underlying generative process. This mechanism is integrated into a novel Adaptive Hopfield Network (A-Hop), overcoming fundamental limitations of conventional metric-based retrieval. A-Hop achieves state-of-the-art performance under challenging conditions, including noise, occlusion, and bias, demonstrating robustness and generalization. Extensive experiments validate its superiority across memory retrieval, tabular and image classification, and multiple-instance learning tasks, consistently outperforming prior methods in both accuracy and generalization.

📝 Abstract
Associative memory models are content-addressable memory systems fundamental to biological intelligence and notable for their high interpretability. However, existing models evaluate retrieval quality by proximity, which cannot guarantee that the retrieved pattern has the strongest association with the query, compromising correctness. We reframe this problem by proposing that a query is a generative variant of a stored memory pattern, and define a variant distribution to model this subtle, context-dependent generative process. Consequently, correct retrieval should return the memory pattern with the maximum a posteriori probability of being the query's origin. This perspective reveals that an ideal similarity measure should approximate the likelihood of each stored pattern generating the query under the variant distribution, which is impossible for the fixed, pre-defined similarities used by existing associative memories. To this end, we develop adaptive similarity, a novel mechanism that learns to approximate this unknown likelihood from samples drawn from context, aiming for correct retrieval. We theoretically prove that adaptive similarity achieves optimal correct retrieval under three canonical and widely applicable types of variants: noisy, masked, and biased. We integrate this mechanism into a novel adaptive Hopfield network (A-Hop), and empirical results show that it achieves state-of-the-art performance across diverse tasks, including memory retrieval, tabular classification, image classification, and multiple instance learning.
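The abstract's MAP view of retrieval can be illustrated with a minimal sketch. Everything below is hypothetical (random data, a hand-picked Gaussian variant distribution with known noise scales), not the paper's learned adaptive similarity; it only shows how "retrieve the pattern with maximum posterior probability of being the query's origin" differs in form from fixed-metric retrieval:

```python
import numpy as np

rng = np.random.default_rng(0)
memories = rng.normal(size=(5, 8))  # stored patterns (rows), illustrative data

def euclidean_retrieve(query, patterns):
    """Fixed-similarity retrieval: nearest stored pattern by Euclidean distance."""
    dists = np.linalg.norm(patterns - query, axis=1)
    return int(np.argmin(dists))

def map_retrieve(query, patterns, sigmas):
    """MAP retrieval under an assumed per-pattern Gaussian variant distribution
    query ~ N(pattern, diag(sigma^2)), with a uniform prior over patterns.
    In A-Hop the likelihood is learned from context; here sigmas are given."""
    # Per-pattern log-likelihood (dropping the constant 0.5*log(2*pi) per dim).
    ll = -0.5 * np.sum(((patterns - query) / sigmas) ** 2
                       + 2.0 * np.log(sigmas), axis=1)
    return int(np.argmax(ll))

# A query generated as a noisy variant of stored pattern 2.
query = memories[2] + 0.1 * rng.normal(size=8)
sigmas = np.full((5, 8), 0.1)  # assumed known noise scale per pattern and dim
retrieved = map_retrieve(query, memories, sigmas)
```

With equal noise scales everywhere, the MAP rule reduces to nearest-Euclidean retrieval; the two diverge exactly when the variant distribution is non-uniform across patterns or dimensions (e.g., masked or biased variants), which is the regime the paper targets.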
Problem

Research questions and friction points this paper is trying to address.

- Existing associative memories fail to guarantee correct retrieval.
- Fixed similarity measures cannot identify the maximum-likelihood origin of a query.
- The paper develops adaptive similarity for optimal pattern association.
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Adaptive similarity learns the generative likelihood from context samples.
- The A-Hop network integrates the adaptive similarity mechanism.
- Achieves optimal retrieval under noisy, masked, and biased variants.
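The innovation above can be situated in a standard modern-Hopfield-style update, where retrieval is a softmax-weighted combination of stored patterns and the similarity function is a pluggable argument. The helper and the two fixed similarities below are an illustrative sketch, not the paper's A-Hop implementation; A-Hop's contribution is learning the similarity rather than fixing it:

```python
import numpy as np

def hopfield_update(query, patterns, sim, beta=4.0):
    """One modern-Hopfield-style retrieval step: score every stored pattern
    with `sim`, softmax the scores (temperature 1/beta), and return the
    resulting convex combination of patterns. Any similarity function,
    including a learned (adaptive) one, can be passed as `sim`."""
    scores = np.array([sim(p, query) for p in patterns])
    weights = np.exp(beta * (scores - scores.max()))  # max-shift for stability
    weights /= weights.sum()
    return weights @ patterns

# Two fixed similarity choices that existing models hard-code.
dot_sim = lambda p, q: p @ q
neg_sq_dist = lambda p, q: -np.sum((p - q) ** 2)

rng = np.random.default_rng(1)
patterns = rng.normal(size=(4, 6))
q = patterns[0] + 0.05 * rng.normal(size=6)  # slightly corrupted pattern 0
out = hopfield_update(q, patterns, neg_sq_dist, beta=8.0)
```

Because `sim` is just a callable, swapping in a context-trained similarity changes only one argument; with a sharp `beta` and a query close to one stored pattern, the update collapses onto that pattern.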
Shurong Wang
Zhejiang University
Yuqi Pan
Institute of Automation, Chinese Academy of Sciences
Zhuoyang Shen
Zhejiang University
Meng Zhang
Zhejiang University
Hongwei Wang
Zhejiang University
Guoqi Li
Professor, Institute of Automation, Chinese Academy of Sciences; previously Tsinghua University
Brain-inspired computing · Spiking neural networks · Brain-inspired large models · NeuroAI