🤖 AI Summary
To address the challenges of AI systems—namely, their dependence on massive training datasets, high energy consumption, and poor environmental adaptability (especially in resource-constrained scenarios such as planetary exploration)—this study proposes a brain-inspired autonomous navigation framework. The method integrates hippocampal place cell and entorhinal grid cell models with rat-like associative learning mechanisms on neuromorphic hardware, forming a closed-loop perception–action system capable of online, unsupervised spatial representation and real-time decision-making in open-field mazes. It represents the first implementation that jointly models and deploys animal spatial cognition and associative learning mechanisms on a neuromorphic robot without large-scale pretraining. Experimental results demonstrate a ~72% reduction in computational energy consumption compared to deep reinforcement learning, faster adaptation to dynamic environments, and robust few-shot, lifelong autonomous navigation.
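The summary's core loop—a place-cell spatial representation whose activity is bound to actions by associative (Hebbian-style) learning—can be sketched in a few lines. This is not the paper's implementation; it is a minimal illustrative model in which `PlaceCell` tuning widths, the four-action repertoire, and the `AssociativeNavigator` class are all assumptions chosen for clarity:

```python
import math

class PlaceCell:
    """Idealized place cell: Gaussian tuning around a preferred 2D location."""
    def __init__(self, cx, cy, sigma=0.3):
        self.cx, self.cy, self.sigma = cx, cy, sigma

    def rate(self, x, y):
        d2 = (x - self.cx) ** 2 + (y - self.cy) ** 2
        return math.exp(-d2 / (2 * self.sigma ** 2))

class AssociativeNavigator:
    """Hebbian association between place-cell activity and discrete actions.

    A rewarded action taken at a location strengthens the weights from the
    place cells active there to that action (w += lr * reward * activity),
    a simple stand-in for the rat-like associative learning in the summary.
    """
    ACTIONS = ["N", "S", "E", "W"]

    def __init__(self, cells, lr=0.1):
        self.cells = cells
        self.w = [[0.0] * len(self.ACTIONS) for _ in cells]
        self.lr = lr

    def activity(self, x, y):
        return [c.rate(x, y) for c in self.cells]

    def choose(self, x, y):
        a = self.activity(x, y)
        scores = [sum(a[i] * self.w[i][j] for i in range(len(a)))
                  for j in range(len(self.ACTIONS))]
        return self.ACTIONS[scores.index(max(scores))]

    def reinforce(self, x, y, action, reward):
        a = self.activity(x, y)
        j = self.ACTIONS.index(action)
        for i, ai in enumerate(a):
            self.w[i][j] += self.lr * reward * ai

# One rewarded trial at a location biases later action selection there:
cells = [PlaceCell(i / 4, j / 4) for i in range(5) for j in range(5)]
nav = AssociativeNavigator(cells)
nav.reinforce(0.2, 0.2, "E", reward=1.0)
```

The design point this illustrates is why such systems avoid large-scale pretraining: the association is written online, from a single co-occurrence of place-cell activity and reward, rather than distilled from a dataset.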
📝 Abstract
Data-driven Artificial Intelligence (AI) approaches have exhibited remarkable prowess across various cognitive tasks using extensive training data. However, the reliance on large datasets and neural networks presents challenges such as high power consumption and limited adaptability, particularly in SWaP-constrained (size, weight, and power) applications like planetary exploration. To address these issues, we propose enhancing the autonomous capabilities of intelligent robots by emulating the associative learning observed in animals. Associative learning enables animals to adapt to their environment by memorizing concurrent events. By replicating this mechanism, neuromorphic robots can navigate dynamic environments autonomously, learning from interactions to optimize performance. This paper explores the emulation of associative learning in rodents using neuromorphic robots within open-field maze environments, leveraging insights from spatial cells such as place and grid cells. By integrating these models, we aim to enable online associative learning for spatial tasks in real-time scenarios, bridging the gap between biological spatial cognition and robotics for advancements in autonomous systems.
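The grid cells mentioned above are commonly idealized as a sum of three cosine gratings whose wave vectors are 60° apart, which yields the hexagonal firing lattice observed in entorhinal cortex. The paper does not specify its cell model; the sketch below is that standard idealization, with the `spacing`, `phase`, and `theta0` parameters being illustrative assumptions:

```python
import math

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0), theta0=0.0):
    """Idealized grid-cell firing rate at position (x, y).

    Sums three cosine gratings with wave vectors 60 degrees apart,
    producing a hexagonal lattice of firing fields with the given
    peak-to-peak `spacing`, offset by `phase` and rotated by `theta0`.
    Output is rectified and normalized to [0, 1].
    """
    f = 4 * math.pi / (math.sqrt(3) * spacing)  # spatial frequency
    total = 0.0
    for k in range(3):
        th = theta0 + k * math.pi / 3
        total += math.cos(f * ((x - phase[0]) * math.cos(th)
                               + (y - phase[1]) * math.sin(th)))
    return max(0.0, total / 3.0)
```

A population of such cells with different spacings and phases gives a periodic position code that place cells can read out, which is the kind of spatial representation the abstract's associative learning operates over.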