🤖 AI Summary
Existing cognitive map models typically treat self-motion solely as a positional update signal and rely on sensory anchoring for stability, leaving them vulnerable to sensory degradation or conflict. This work redefines self-motion as a generative structural prior that actively constrains the geometric organization of cognitive maps, thereby enhancing robustness under ambiguous perception. Methodologically, we propose a spiking neural circuit grounded in predictive coding that integrates path integration, analog modulation, adaptive thresholding, and a capacity-efficient recurrent mechanism. To our knowledge, this is the first framework to shift self-motion’s role from a mere “update signal” to an active “structural constraint.” The approach enables zero-shot generalization to unseen environments and is validated on a real quadrupedal robot. Results demonstrate significant improvements in local topological fidelity, global localization accuracy, step-wise prediction precision, and robustness of landmark-based navigation in dynamic settings.
📝 Abstract
Most computational accounts of cognitive maps assume that stability is achieved primarily through sensory anchoring, with self-motion contributing only incremental positional updates. However, biological spatial representations often remain coherent even when sensory cues degrade or conflict, suggesting that self-motion may play a deeper organizational role. Here, we show that self-motion can act as a structural prior that actively organizes the geometry of learned cognitive maps. We embed a path-integration-based motion prior in a predictive-coding framework, implemented using a capacity-efficient, brain-inspired recurrent mechanism combining spiking dynamics, analog modulation, and adaptive thresholds. Across highly aliased, dynamically changing, and naturalistic environments, this structural prior consistently stabilizes map formation, improving local topological fidelity, global positional accuracy, and next-step prediction under sensory ambiguity. Mechanistic analyses reveal that the motion prior itself encodes geometrically precise trajectories even under tight constraints on internal states and generalizes zero-shot to unseen environments, outperforming simpler motion-based constraints. Finally, deployment on a quadrupedal robot demonstrates that motion-derived structural priors enhance online landmark-based navigation under real-world sensory variability. Together, these results reframe self-motion as an organizing scaffold for coherent spatial representations, showing how brain-inspired principles can systematically strengthen spatial intelligence in embodied artificial agents.
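To make the core idea concrete, the following is a minimal sketch (not the paper's spiking implementation) of how a path-integration motion prior can be combined with a predictive-coding correction: self-motion first predicts the next position, and a sensory prediction error then nudges the estimate. The landmark model, noise levels, and gain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
landmark = np.array([5.0, 5.0])   # assumed known landmark position

def observe(true_pos, noise=0.3):
    """Noisy observation: vector from the agent to the landmark."""
    return landmark - true_pos + rng.normal(0.0, noise, 2)

true_pos = np.zeros(2)
est_pos = np.zeros(2)
gain = 0.2                        # prediction-error correction gain (assumed)

for step in range(100):
    # Self-motion with a small drift plus noise.
    motion = np.array([0.05, 0.02]) + rng.normal(0.0, 0.1, 2)
    true_pos = true_pos + motion

    # 1) Structural prior: path integration predicts the next position.
    est_pos = est_pos + motion

    # 2) Predictive coding: compare predicted vs. observed landmark vector.
    predicted_obs = landmark - est_pos
    error = observe(true_pos) - predicted_obs

    # 3) Correct the estimate by a fraction of the prediction error.
    est_pos = est_pos - gain * error

drift = float(np.linalg.norm(est_pos - true_pos))
print(f"final estimation error: {drift:.3f}")
```

Without step 3, the estimate accumulates unbounded dead-reckoning drift; without step 1, each noisy observation must be trusted in full. The prior keeps the estimate geometrically coherent between corrections, which is the role the paper argues self-motion plays in map formation.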