🤖 AI Summary
This study addresses the dynamical stability of discrete attractor neural networks under high memory load, going beyond the classical critical capacity by introducing the distinct notion of a “critical load.” Combining stability analysis of nonlinear dynamical systems, random matrix theory, and mean-field methods, we perform a spectral analysis of the Jacobian at the fixed points of networks with graded, noisy neural activity, separating the bulk eigenvalue statistics from the outlier eigenvalues. We show how threshold-linear activation functions cooperate with sparse-like activity patterns to enhance stability, yielding a general theory of local stability that applies to a broad class of network models. The theory predicts the stability of all fixed points below the critical load, rigorously establishing that dynamic stability is achievable even under high-density pattern storage, and it provides new theoretical principles and a quantitative foundation for understanding the robustness of biological memory.
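To make the object of the analysis concrete, here is the standard linearization underlying a Jacobian spectral analysis of this kind, written for an assumed rate model (an illustrative sketch; the paper's exact dynamics and notation may differ):

```latex
% Assumed rate dynamics with connectivity W and activation \phi
% (illustrative form, not necessarily the paper's exact model):
\dot{h}_i = -h_i + \sum_j W_{ij}\,\phi(h_j)

% Linearizing around a fixed point h^* gives \delta\dot{h} = J\,\delta h, with
J = -\mathbb{I} + W\,\operatorname{diag}\!\bigl(\phi'(h^*)\bigr)

% Local stability criterion: all eigenvalues of J lie in the left half-plane,
\max_{\lambda \in \operatorname{spec}(J)} \operatorname{Re}\,\lambda < 0
```

With a threshold-linear φ, the derivative φ′(h\*) is a 0/1 mask, so only the active neurons contribute to J; this gating is one way to see how sparse-like activity patterns can shrink the bulk of the spectrum and aid stability.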
📝 Abstract
Neural networks storing multiple discrete attractors are canonical models of biological memory. Previously, the dynamical stability of such networks could be guaranteed only under highly restrictive conditions. Here, we derive a theory of the local stability of discrete fixed points in a broad class of networks with graded neural activities and in the presence of noise. By directly analyzing the bulk and outliers of the Jacobian spectrum, we show that all fixed points are stable below a critical load that is distinct from the classical *critical capacity* and depends on the statistics of neural activities at the fixed points as well as on the single-neuron activation function. Our analysis highlights the computational benefits of threshold-linear activation and sparse-like patterns.
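As a numerical companion to the stability criterion described above, the following minimal Python sketch finds a fixed point of an assumed threshold-linear rate network and checks the real parts of its Jacobian eigenvalues. All choices here (N, the gain g, a purely random W with no stored patterns) are illustrative assumptions, not the paper's model; structured pattern-storing couplings would add the low-rank terms responsible for the outlier eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 400                                    # network size (illustrative)
g = 0.5                                    # coupling gain, kept subcritical
phi = lambda h: np.maximum(h, 0.0)         # threshold-linear activation
dphi = lambda h: (h > 0).astype(float)     # its derivative: a 0/1 gate

# Purely random connectivity (hypothetical stand-in; pattern-storing
# couplings would add low-rank structure and hence spectral outliers).
W = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
b = rng.normal(0.0, 1.0, size=N)           # static input keeping some units active

# Find a fixed point of dh/dt = -h + W @ phi(h) + b by damped iteration.
h = np.zeros(N)
for _ in range(5000):
    h = 0.9 * h + 0.1 * (W @ phi(h) + b)

# Jacobian at the fixed point: J = -I + W @ diag(phi'(h*)).
# Only the active (h > 0) neurons propagate perturbations, which is the
# gating effect behind the stability benefit of sparse-like activity.
J = -np.eye(N) + W * dphi(h)[None, :]

eigs = np.linalg.eigvals(J)
print(f"active fraction:   {dphi(h).mean():.2f}")
print(f"spectral abscissa: {eigs.real.max():+.3f}")
print(f"locally stable:    {eigs.real.max() < 0}")
```

In this purely random setting only the bulk of the spectrum is present; a negative printed spectral abscissa is exactly the local-stability condition that the theory evaluates analytically at high load.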