🤖 AI Summary
Classical Hopfield networks exhibit associative memory capabilities, yet their local information-processing mechanisms remain poorly understood. Method: This paper introduces Partial Information Decomposition (PID) theory into associative memory modeling for the first time, proposing “redundancy maximization” as a principled learning objective. It establishes an information-theoretic, neuron-level learning rule by directly optimizing the redundant information shared between each neuron's inputs, enabling interpretable control over individual neuron contributions. Contribution/Results: The resulting model achieves a memory capacity of 1.59 patterns per neuron, exceeding the roughly 0.14 capacity of the classical Hopfield network by more than an order of magnitude and outperforming state-of-the-art variants. These results empirically validate redundancy maximization as a fundamental, generalizable learning principle for associative memory systems.
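For context, the standard two-source PID identity of Williams and Beer (2010) splits the mutual information that a pair of inputs carries about a target into exactly the four parts named above. The summary does not identify which specific redundancy functional the paper optimizes, so the identity below is the generic decomposition rather than the paper's particular measure:

```latex
% Standard two-source Partial Information Decomposition (Williams & Beer, 2010):
% the joint mutual information splits into unique, redundant, and synergistic atoms.
I(T; X_1, X_2) = \underbrace{U(T; X_1)}_{\text{unique to } X_1}
               + \underbrace{U(T; X_2)}_{\text{unique to } X_2}
               + \underbrace{R(T; X_1, X_2)}_{\text{redundant}}
               + \underbrace{S(T; X_1, X_2)}_{\text{synergistic}}
```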
📝 Abstract
Associative memory, traditionally modeled by Hopfield networks, enables the retrieval of previously stored patterns from partial or noisy cues. Yet the local computational principles required to enable this function remain incompletely understood. To formally characterize the local information processing in such systems, we employ a recent extension of information theory, Partial Information Decomposition (PID). PID decomposes the contribution of different inputs to an output into unique information from each input, redundant information shared across inputs, and synergistic information that emerges only from combining the inputs. Applying this framework to individual neurons in classical Hopfield networks, we find that below the memory capacity the information in a neuron's activity is dominated by redundancy between the external pattern input and the internal recurrent input, while synergy and unique information remain close to zero; once the capacity is surpassed, performance drops steeply. Inspired by this observation, we use redundancy as an information-theoretic learning goal that is directly optimized for each neuron, dramatically increasing the network's memory capacity to 1.59 patterns per neuron, a more than tenfold improvement over the 0.14 capacity of classical Hopfield networks, and even outperforming recent state-of-the-art implementations of Hopfield networks. Ultimately, this work establishes redundancy maximization as a new design principle for associative memories and opens pathways to new associative memory models based on information-theoretic goals.
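To make the neuron-level quantities concrete, below is a minimal, self-contained sketch that estimates redundancy between a neuron's two inputs (e.g., its external pattern input X1 and its recurrent input X2) and its output T from an empirical joint distribution, using the original Williams-Beer I_min measure. The choice of I_min and all names here are illustrative assumptions; the abstract does not specify which redundancy measure the authors optimize or how the per-neuron objective is parameterized.

```python
import numpy as np

def i_min_redundancy(joint):
    """Williams-Beer I_min redundancy Red(T; X1, X2) in bits.

    joint: array of shape (|T|, |X1|, |X2|) holding the joint
    probabilities p(t, x1, x2) (must sum to 1).
    """
    p_t = joint.sum(axis=(1, 2))  # marginal p(t)
    marginals = [
        (joint.sum(axis=2), joint.sum(axis=(0, 2))),  # (p(t, x1), p(x1))
        (joint.sum(axis=1), joint.sum(axis=(0, 1))),  # (p(t, x2), p(x2))
    ]
    red = 0.0
    for t in range(joint.shape[0]):
        if p_t[t] == 0:
            continue
        # Specific information each source provides about the outcome T = t:
        # I_spec(T=t; X) = sum_x p(x|t) * log2( p(t|x) / p(t) )
        spec = []
        for p_tx, p_x in marginals:
            s = sum(
                (p_tx[t, x] / p_t[t]) * np.log2(p_tx[t, x] / p_x[x] / p_t[t])
                for x in range(p_tx.shape[1])
                if p_tx[t, x] > 0
            )
            spec.append(s)
        # Redundancy takes the source-wise minimum, averaged over outcomes.
        red += p_t[t] * min(spec)
    return red

# Sanity checks: a fully redundant copy (T = X1 = X2) yields 1 bit,
# while XOR (T = X1 ^ X2, purely synergistic) yields 0 bits.
copy = np.zeros((2, 2, 2))
copy[0, 0, 0] = copy[1, 1, 1] = 0.5
xor = np.zeros((2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        xor[x1 ^ x2, x1, x2] = 0.25
print(i_min_redundancy(copy))  # -> 1.0
print(i_min_redundancy(xor))   # -> 0.0
```

A redundancy-maximizing learning rule in the paper's spirit would adjust a neuron's incoming weights to increase such a quantity during pattern storage; since the abstract gives no optimizer details, no training loop is sketched here.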