🤖 AI Summary
Traditional continuous attractor networks (CANs) face a fundamental stability–resolution trade-off: they struggle to simultaneously maintain robustness against neural noise and heterogeneity while achieving fine-grained spatial resolution. To address this, we propose a grid-cell-inspired sparse binary distributed coding framework that integrates random feature mapping with nonlinear manifold embedding. The framework maps continuous 2D position variables into a high-dimensional sparse binary space, enabling periodic receptive field construction and programmable path integration. Theoretical analysis and large-scale simulations demonstrate that, under biologically realistic constraints, the encoding significantly enhances CAN robustness—without sacrificing sub-grid-scale spatial resolution—and supports flexible vector-field integration in complex geometric environments. Crucially, this work departs from the classical CAN paradigm by unifying noise resilience with high-fidelity spatial representation.
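The random-feature encoding described above can be sketched in a few lines: project a 2D position through random spatial frequencies, apply a periodic nonlinearity, and keep only the most active units to obtain a sparse binary code. This is a minimal illustration, not the paper's implementation; the dimensions, frequency scale, and top-K sparsification rule are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N = 2, 512   # input dimension (2D position) and code dimension (assumed values)
K = 32          # number of active units, i.e. the sparsity level (assumed)

# Random spatial frequencies and phases: each unit responds periodically
# to position, giving grid-cell-like spatially periodic receptive fields.
W = rng.normal(scale=2 * np.pi, size=(N, D))   # random frequency vectors
b = rng.uniform(0, 2 * np.pi, size=N)          # random phase offsets

def encode(pos):
    """Map a 2D position to a K-sparse binary code via periodic random features."""
    u = np.cos(W @ pos + b)              # periodic tuning curve per unit
    code = np.zeros(N, dtype=bool)
    code[np.argsort(u)[-K:]] = True      # binarise: keep the K most active units
    return code

near_a = encode(np.array([0.10, 0.20]))
near_b = encode(np.array([0.11, 0.21]))  # nearby position: codes overlap strongly
far_c = encode(np.array([0.60, 0.80]))   # distant position: overlap near chance

print(near_a.sum(), (near_a & near_b).sum(), (near_a & far_c).sum())
```

Because the receptive fields are periodic rather than unimodal bumps, nearby positions share many active units while distant positions overlap only at chance level, which is the property the framework exploits to combine stability with resolution.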
📝 Abstract
Continuous attractor networks (CANs) are widely used to model how the brain temporarily retains continuous behavioural variables, such as an animal's position in an environment, via persistent recurrent activity. However, this memory mechanism is highly sensitive to even small imperfections, such as noise or heterogeneity, both of which are common in biological systems. Previous work has shown that discretising the continuum into a finite set of discrete attractor states provides robustness to these imperfections, but necessarily reduces the resolution of the represented variable, creating a dilemma between stability and resolution. We show that this stability–resolution dilemma is most severe for CANs using unimodal bump-like codes, as in traditional models. To overcome this, we investigate sparse binary distributed codes based on random feature embeddings, in which neurons have spatially periodic receptive fields. We demonstrate theoretically and with simulations that such grid-cell-like codes enable CANs to achieve both high stability and high resolution simultaneously. The model extends to embedding arbitrary nonlinear manifolds into a CAN, such as spheres or tori, and generalises linear path integration to integration along freely programmable on-manifold vector fields.  Together, this work provides a theory of how the brain could robustly represent continuous variables with high resolution and perform flexible computations over task-relevant manifolds.
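The generalisation of path integration to programmable vector fields has a simple algebraic core: if each unit's response is a periodic function of `W @ pos + b`, then moving with velocity `v` shifts every unit's phase by `W @ v * dt`, so integration can be carried out entirely in code space. The sketch below illustrates this identity for an assumed rotational vector field; the network sizes and the Euler update are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                                         # code dimension (assumed)
W = rng.normal(scale=2 * np.pi, size=(N, 2))    # random frequency vectors
b = rng.uniform(0, 2 * np.pi, size=N)           # random phase offsets

def phases(pos):
    """Per-unit phases of the periodic code at a given 2D position."""
    return (W @ pos + b) % (2 * np.pi)

def v(pos):
    """A freely programmable on-manifold vector field: rotation about the origin."""
    return np.array([-pos[1], pos[0]])

# Integrate in code space: each Euler step shifts every phase by W @ v(pos) * dt.
pos = np.array([1.0, 0.0])
phi = phases(pos)
dt = 1e-3
for _ in range(1000):
    phi = (phi + dt * (W @ v(pos))) % (2 * np.pi)
    pos = pos + dt * v(pos)   # ground-truth trajectory, for comparison only

# The phase-updated code should match the code of the directly-encoded endpoint.
err = np.abs(np.angle(np.exp(1j * (phi - phases(pos)))))
print(err.max())
```

Since the phase update and the position update are term-by-term identical under the Euler step, the residual `err` reflects only floating-point rounding; swapping in a different `v(pos)` reprograms the integration without changing the code or the readout.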