🤖 AI Summary
This study addresses four core challenges at the intersection of neuroscience, artificial general intelligence (AGI), and neuromorphic computing: (1) integrating spiking dynamics with foundation models; (2) mitigating catastrophic forgetting in lifelong learning; (3) unifying language and sensorimotor learning in embodied agents; and (4) ensuring verifiable ethical guarantees in advanced neuromorphic autonomous systems. Methodologically, it introduces a unified architectural paradigm grounded in biologically inspired synaptic plasticity, sparse spike coding, and multimodal association. This paradigm integrates Transformer-based attention mechanisms, foundation-model pretraining, multi-agent coordination, and hardware-aware neuromorphic substrates, including memristive crossbars, in-memory computing, and photonic/quantum accelerators. The resulting cross-layer framework is the first to integrate these elements holistically, yielding substantial improvements in energy efficiency, continual adaptability, and interpretability for AGI systems, thereby establishing both theoretical foundations and implementable pathways for human–machine intelligent co-evolution.
📝 Abstract
This position and survey paper identifies the emerging convergence of neuroscience, artificial general intelligence (AGI), and neuromorphic computing toward a unified research paradigm. Using a framework grounded in brain physiology, we highlight how synaptic plasticity, sparse spike-based communication, and multimodal association provide design principles for next-generation AGI systems that potentially combine human and machine intelligence. The review traces this evolution from early connectionist models to state-of-the-art large language models, demonstrating how key innovations such as transformer attention, foundation-model pre-training, and multi-agent architectures mirror neurobiological processes such as cortical mechanisms, working memory, and episodic consolidation. We then discuss emerging physical substrates capable of breaking the von Neumann bottleneck to achieve brain-scale efficiency in silicon: memristive crossbars, in-memory compute arrays, and quantum and photonic devices. We identify four critical challenges at this intersection: 1) integrating spiking dynamics with foundation models, 2) maintaining lifelong plasticity without catastrophic forgetting, 3) unifying language with sensorimotor learning in embodied agents, and 4) enforcing ethical safeguards in advanced neuromorphic autonomous systems. This combined perspective across neuroscience, computation, and hardware offers an integrative agenda for each of these fields.