🤖 AI Summary
Robustly performing multi-timescale symbolic computation on brain-inspired hardware remains challenging for recurrent spiking neural networks (RSNNs).
Method: This paper proposes a single-shot weight-learning scheme that embeds finite-state machines (FSMs) into the attractor dynamics of RSNNs. It combines high-dimensional distributed representations, vector binding, and heteroassociative outer products, enabling plug-and-play, platform-agnostic symbolic computation without fine-tuning. The recurrent weight matrix superimposes a symmetric autoassociative term with asymmetric transition terms.
Contributions/Results: The authors deploy large-scale FSMs on an experimental closed-loop memristive system and on Intel's Loihi 2 chip. Experiments show robust computation and seamless scaling despite strongly non-ideal weights and hardware nonlinearities, establishing a scalable route to symbolic computation on neuromorphic hardware.
📝 Abstract
Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, which are each formed by the vector binding of an input and heteroassociative outer-products between states. Our approach is validated through simulations with highly nonideal weights; an experimental closed-loop memristive hardware setup; and on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable representation-invariant language for cognitive algorithms in neuromorphic hardware.
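The weight construction described in the abstract, a symmetric autoassociative term that makes each FSM state an attractor, plus asymmetric input-bound heteroassociative outer products that implement transitions, can be sketched in plain NumPy. This is an illustrative toy construction under assumed conventions (bipolar hypervectors, elementwise-multiply binding, sign-nonlinearity dynamics); the paper's exact binding scheme, normalisation, and spiking dynamics may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048  # hypervector dimension (assumed; high dimension gives quasi-orthogonality)

# Random bipolar hypervectors: two FSM states and one input symbol.
s1, s2, x = (rng.choice([-1, 1], size=N) for _ in range(3))

# Symmetric autoassociative term: stores s1 and s2 as fixed-point attractors.
W_sym = (np.outer(s1, s1) + np.outer(s2, s2)) / N

# Asymmetric transition term for the rule (s1, x) -> s2: a heteroassociative
# outer product whose "key" is the input symbol bound (elementwise product)
# to the source state. Assumed recipe for illustration only.
W_asym = np.outer(s2, x * s1) / N

# 1) Attractor cleanup: a corrupted copy of s1 (about 25% of entries flipped)
#    is pulled back onto s1 by the symmetric term.
noisy = s1 * rng.choice([1, 1, 1, -1], size=N)
recovered = np.sign(W_sym @ noisy)
print(np.mean(recovered == s1))  # close to 1.0

# 2) Transition: binding the input x to the current state activates the
#    asymmetric term and drives the network from s1 toward s2.
next_state = np.sign((W_sym + W_asym) @ (x * s1))
print(np.mean(next_state == s2))  # close to 1.0
```

Because random bipolar hypervectors are nearly orthogonal, the cross-talk between stored terms stays small, which is what lets many states and transitions be superimposed in a single matrix without fine-tuning.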