🤖 AI Summary
This work tackles the millisecond-scale, real-time decision-making required of high-speed robots in highly dynamic environments with a hardware-software co-designed neuromorphic control approach. The method combines an event-driven, local e-prop learning rule with a spiking neural network that has fixed random connectivity, implemented on a mixed-signal neuromorphic chip to form an online spiking reinforcement learning framework. The framework achieves fast, stable control of a robotic air-hockey player within only a handful of training episodes, demonstrating for the first time in a real-world high-speed interaction task the advantages of brain-inspired computing in both real-time performance and sample efficiency.
📝 Abstract
Air hockey demands split-second decisions at high puck velocities, a challenge we address with a compact network of spiking neurons running on a mixed-signal analog/digital neuromorphic processor. By co-designing hardware and learning algorithms, we train the system to achieve successful puck interactions through reinforcement learning in a remarkably small number of trials. The network leverages fixed random connectivity to capture the task's temporal structure and adopts a local e-prop learning rule in the readout layer, exploiting event-driven activity for fast and efficient learning. The result is real-time learning with a setup comprising a computer and the neuromorphic chip in the loop, enabling practical training of spiking neural networks for autonomous robotic systems. This work bridges neuroscience-inspired hardware with real-world robotic control, showing that brain-inspired approaches can tackle fast-paced interaction tasks while supporting always-on learning in intelligent machines.
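To make the core idea concrete, here is a minimal NumPy sketch of the architecture the abstract describes: a spiking network with fixed random connectivity feeding a plastic readout trained by a local, eligibility-trace (e-prop-style) update. All sizes, time constants, the soft-reset LIF dynamics, and the supervised error signal are illustrative assumptions, not the paper's actual on-chip configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_OUT, N_IN, T = 200, 2, 3, 100    # assumed sizes: neurons, outputs, inputs, timesteps
DT, TAU_M, TAU_E = 1e-3, 20e-3, 20e-3
V_TH = 1.0
ALPHA = np.exp(-DT / TAU_M)           # membrane leak per step
KAPPA = np.exp(-DT / TAU_E)           # eligibility-trace decay per step
LR = 1e-3

W_in = rng.normal(0.0, 0.5, (N, N_IN))              # fixed random input weights
W_rec = rng.normal(0.0, 0.3 / np.sqrt(N), (N, N))   # fixed random recurrent weights
W_out = np.zeros((N_OUT, N))                        # plastic readout, trained locally

def run_episode(inputs, targets):
    """Simulate the LIF reservoir for T steps; update only the readout weights.

    The update is local: each synapse combines its own eligibility trace
    (low-pass-filtered presynaptic spikes) with a broadcast error signal.
    """
    global W_out
    v = np.zeros(N)       # membrane potentials
    z = np.zeros(N)       # spikes from the previous step
    trace = np.zeros(N)   # per-neuron eligibility trace
    for t in range(T):
        v = ALPHA * v + W_in @ inputs[t] + W_rec @ z
        z = (v >= V_TH).astype(float)       # spike when threshold is crossed
        v -= z * V_TH                       # soft reset after a spike
        trace = KAPPA * trace + z           # filtered event-driven activity
        y = W_out @ trace                   # leaky linear readout
        err = targets[t] - y                # broadcast learning signal
        W_out += LR * np.outer(err, trace)  # local e-prop-style readout update
    return y
```

In an actual reinforcement-learning loop the broadcast error would come from a reward signal rather than a per-step target, and the reservoir dynamics would run on the neuromorphic chip, with only the readout adapted online.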