Redundancy Maximization as a Principle of Associative Memory Learning

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Classical Hopfield networks exhibit associative memory, yet the local information-processing mechanisms underlying it remain poorly understood. Method: This paper introduces Partial Information Decomposition (PID) into associative memory modeling for the first time, proposing redundancy maximization as a principled learning objective. It establishes an information-theoretic, neuron-level learning rule by directly optimizing the redundant information shared between a neuron's external pattern input and its internal recurrent input, enabling interpretable control over individual neuron contributions. Contribution/Results: The resulting model achieves a memory capacity of 1.59 patterns per neuron, more than ten times the 0.14 of the classical Hopfield network and above recent state-of-the-art variants. These results support redundancy maximization as a fundamental, generalizable learning principle for associative memory systems.

📝 Abstract
Associative memory, traditionally modeled by Hopfield networks, enables the retrieval of previously stored patterns from partial or noisy cues. Yet the local computational principles required to enable this function remain incompletely understood. To formally characterize the local information processing in such systems, we employ a recent extension of information theory, Partial Information Decomposition (PID). PID decomposes the contribution of different inputs to an output into unique information from each input, redundant information across inputs, and synergistic information that emerges from combining different inputs. Applying this framework to individual neurons in classical Hopfield networks, we find that below the memory capacity, the information in a neuron's activity is characterized by high redundancy between the external pattern input and the internal recurrent input, while synergy and unique information are close to zero until the memory capacity is surpassed and performance drops steeply. Inspired by this observation, we use redundancy as an information-theoretic learning goal that is directly optimized for each neuron, dramatically increasing the network's memory capacity to 1.59, a more than tenfold improvement over the 0.14 capacity of classical Hopfield networks, even outperforming recent state-of-the-art implementations of Hopfield networks. Ultimately, this work establishes redundancy maximization as a new design principle for associative memories and opens pathways for new associative memory models based on information-theoretic goals.
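For context on the capacity figures in the abstract, the classical baseline being improved upon can be sketched in a few lines: a Hopfield network storing bipolar patterns with the Hebbian outer-product rule, then recalling one from a noisy cue. This is a minimal illustration, not the paper's code; the network size, pattern count, and noise level are arbitrary choices (load alpha = P/N = 0.05 is well below the classical 0.14 capacity, so recall should succeed):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product rule with zeroed self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# corrupt a stored pattern by flipping 10 of its 100 bits
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1

# retrieve via asynchronous sign updates (a few random-order sweeps)
state = cue.copy()
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N      # close to 1.0 on successful recall
```

The overlap with the stored pattern measures recall quality; as the abstract notes, once the load exceeds roughly 0.14 patterns per neuron, retrieval in this classical model breaks down steeply.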
Problem

Research questions and friction points this paper is trying to address.

Understanding local computational principles in associative memory systems
Analyzing information components in Hopfield networks using Partial Information Decomposition
Developing redundancy maximization as a learning goal to enhance memory capacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Partial Information Decomposition to analyze neuron-level information processing
Directly optimizes redundancy as a per-neuron learning goal
Increases memory capacity more than tenfold over classical Hopfield networks
👥 Authors
M. Blumel, Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
Andreas C. Schneider, Faculty of Physics, Institute for the Dynamics of Complex Systems, University of Göttingen
Valentin Neuhaus, Faculty of Physics, Institute for the Dynamics of Complex Systems, University of Göttingen
David A. Ehrlich, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen, Germany
Marcel Graetz, Champalimaud Centre for the Unknown, Lisbon, Portugal
M. Wibral, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen, Germany
Abdullah Makkeh, University of Göttingen
V. Priesemann, Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany