Self-motion as a structural prior for coherent and robust formation of cognitive maps

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing cognitive map models typically treat self-motion solely as a positional update signal and rely on sensory anchoring for stability, which leaves them vulnerable to sensory degradation or conflict. This work redefines self-motion as a generative structural prior that actively constrains the geometric organization of cognitive maps, thereby enhancing robustness under ambiguous perception. Methodologically, the authors propose a spiking neural circuit grounded in predictive coding that integrates path integration, analog modulation, adaptive thresholding, and a capacity-efficient recurrent mechanism. To their knowledge, this is the first framework to shift self-motion’s role from a mere “update signal” to an active “structural constraint.” The approach enables zero-shot generalization to unseen environments and is validated on a real quadrupedal robot. Results demonstrate significant improvements in local topological fidelity, global localization accuracy, step-wise prediction precision, and robustness of landmark-based navigation in dynamic settings.

📝 Abstract
Most computational accounts of cognitive maps assume that stability is achieved primarily through sensory anchoring, with self-motion contributing to incremental positional updates only. However, biological spatial representations often remain coherent even when sensory cues degrade or conflict, suggesting that self-motion may play a deeper organizational role. Here, we show that self-motion can act as a structural prior that actively organizes the geometry of learned cognitive maps. We embed a path-integration-based motion prior in a predictive-coding framework, implemented using a capacity-efficient, brain-inspired recurrent mechanism combining spiking dynamics, analog modulation and adaptive thresholds. Across highly aliased, dynamically changing and naturalistic environments, this structural prior consistently stabilizes map formation, improving local topological fidelity, global positional accuracy and next-step prediction under sensory ambiguity. Mechanistic analyses reveal that the motion prior itself encodes geometrically precise trajectories under tight constraints of internal states and generalizes zero-shot to unseen environments, outperforming simpler motion-based constraints. Finally, deployment on a quadrupedal robot demonstrates that motion-derived structural priors enhance online landmark-based navigation under real-world sensory variability. Together, these results reframe self-motion as an organizing scaffold for coherent spatial representations, showing how brain-inspired principles can systematically strengthen spatial intelligence in embodied artificial agents.
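The abstract's central mechanism, path-integrated self-motion serving as a prediction that constrains sensory updates, can be sketched in a minimal toy form. This is an illustration of the general predictive-coding idea, not the paper's spiking implementation; the function names and the `prior_weight` parameter are assumptions made here for clarity:

```python
import math

def path_integrate(x, y, heading, v, omega, dt=0.1):
    """One step of planar path integration: update heading from angular
    velocity omega, then advance position along the new heading at speed v."""
    heading += omega * dt
    x += v * dt * math.cos(heading)
    y += v * dt * math.sin(heading)
    return x, y, heading

def predictive_update(motion_prediction, observation, prior_weight=0.7):
    """Blend the motion-derived prediction with a sensory observation.

    The motion prior acts as a structural constraint: a high prior_weight
    keeps the estimate geometrically consistent with the integrated
    trajectory instead of snapping to an ambiguous or conflicting cue.
    """
    return [p + (1.0 - prior_weight) * (o - p)
            for p, o in zip(motion_prediction, observation)]
```

With `prior_weight=1.0` the estimate ignores the observation entirely (pure path integration); with `prior_weight=0.0` it snaps to the sensory cue (pure anchoring). The paper's contribution lies in learning such a constraint within a spiking circuit rather than hand-tuning a blend weight.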
Problem

Research questions and friction points this paper is trying to address.

Can self-motion organize cognitive map geometry as a structural prior, beyond incremental positional updates?
Can such a prior stabilize map formation in ambiguous and changing environments?
Do motion priors enhance real-world navigation in embodied artificial agents?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-motion as structural prior for cognitive maps
Predictive coding with spiking dynamics and adaptive thresholds
Enhances navigation under sensory ambiguity in robots
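The innovation bullets mention spiking dynamics with adaptive thresholds. A standard building block for this is the adaptive-threshold leaky integrate-and-fire (ALIF) neuron, sketched below; this is a generic textbook formulation, not the paper's circuit, and all constants (`tau_v`, `tau_theta`, `beta`) are illustrative assumptions:

```python
def alif_step(v, theta, inp, tau_v=20.0, tau_theta=100.0,
              theta0=1.0, beta=0.5, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron with an
    adaptive threshold: each spike raises the threshold offset theta,
    which then decays back toward zero (effective threshold theta0)."""
    v = v + dt / tau_v * (-v + inp)                        # leaky integration
    spike = 1.0 if v >= theta0 + theta else 0.0            # threshold crossing
    v = v * (1.0 - spike)                                  # reset on spike
    theta = theta * (1.0 - dt / tau_theta) + beta * spike  # adapt threshold
    return v, theta, spike
```

Because each spike raises the firing threshold, sustained input produces progressively sparser spiking, which is one route to the capacity efficiency the summary attributes to the proposed recurrent mechanism.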
Yingchao Yu
School of Information and Intelligence Science, Donghua University, Shanghai, China.
Pengfei Sun
Department of Electrical and Electronic Engineering, Imperial College London, London, UK.
Yaochu Jin
School of Engineering, Westlake University, Hangzhou, China.
Kuangrong Hao
School of Information and Intelligence Science, Donghua University, Shanghai, China.
Hao Zhang
School of Engineering, Westlake University, Hangzhou, China.
Yifeng Zhang
Alibaba Group, Hangzhou, China.
Wenxuan Pan
Institute of Automation, Chinese Academy of Sciences (Brain-inspired Intelligence).
Wei Chen
School of Engineering, Westlake University, Hangzhou, China.
Danyal Akarca
Department of Electrical and Electronic Engineering, Imperial College London, London, UK.
Yuchen Xiao
Lead of Embodied AI R&D, Unitree | Research Scientist, J.P. Morgan | Ph.D., Northeastern University.