Training slow silicon neurons to control extremely fast robots with spiking reinforcement learning

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the millisecond-level, real-time decision-making required of high-speed robots in highly dynamic environments with a neuromorphic, hardware–software co-designed control approach. The method integrates an event-driven local e-prop learning rule with a spiking neural network with fixed random connectivity, implemented on a mixed-signal neuromorphic chip, to form an online spiking reinforcement learning framework. This framework achieves rapid and stable control of a robotic air-hockey player after only a minimal number of training episodes, demonstrating, for the first time in a real-world high-speed interaction task, the advantages of brain-inspired computing in both real-time performance and sample efficiency.

📝 Abstract
Air hockey demands split-second decisions at high puck velocities, a challenge we address with a compact network of spiking neurons running on a mixed-signal analog/digital neuromorphic processor. By co-designing hardware and learning algorithms, we train the system to achieve successful puck interactions through reinforcement learning in a remarkably small number of trials. The network leverages fixed random connectivity to capture the task's temporal structure and adopts a local e-prop learning rule in the readout layer to exploit event-driven activity for fast and efficient learning. The result is real-time learning with a setup comprising a computer and the neuromorphic chip in-the-loop, enabling practical training of spiking neural networks for robotic autonomous systems. This work bridges neuroscience-inspired hardware with real-world robotic control, showing that brain-inspired approaches can tackle fast-paced interaction tasks while supporting always-on learning in intelligent machines.
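The learning scheme described in the abstract — fixed random recurrent connectivity, with only a readout layer trained by a local, event-driven, reward-modulated rule — can be sketched in plain NumPy. This is a minimal illustrative toy, not the authors' implementation: the network sizes, time constants, and the binary-cue task are all assumptions, and the update is a simple three-factor rule (reward times a policy term times an eligibility trace, the low-pass-filtered presynaptic spike train) in the spirit of e-prop rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes and constants are illustrative, not taken from the paper.
N_IN, N_REC = 2, 100
T = 20           # timesteps per episode
ALPHA = 0.9      # membrane leak of the LIF reservoir neurons
KAPPA = 0.8      # decay of the eligibility trace (filtered spikes)
V_TH = 1.0       # spike threshold
ETA = 0.05       # learning rate

# Fixed random connectivity: neither W_in nor W_rec is ever updated.
W_in = rng.normal(0.0, 2.0, (N_REC, N_IN))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N_REC), (N_REC, N_REC))
w_out = np.zeros(N_REC)   # only the readout weights learn

def episode(cue, learn=True):
    """One trial: drive the reservoir with a one-hot cue, sample a binary
    action from the readout, and apply a local reward-modulated update."""
    global w_out
    x = np.zeros(N_IN)
    x[cue] = 1.0
    v = np.zeros(N_REC)
    spikes = np.zeros(N_REC)
    trace = np.zeros(N_REC)   # eligibility trace: low-pass filtered spikes
    for _ in range(T):
        v = ALPHA * v + W_in @ x + W_rec @ spikes
        spikes = (v > V_TH).astype(float)
        v -= spikes * V_TH                 # soft reset after spiking
        trace = KAPPA * trace + spikes
    p = 1.0 / (1.0 + np.exp(-(w_out @ trace)))   # action probability
    action = int(rng.random() < p)
    reward = 1.0 if action == cue else 0.0
    if learn:
        # Three-factor rule: reward x (action - p) x eligibility trace.
        # Purely local: each synapse needs only its own trace and two
        # broadcast scalars, with no backpropagation through time.
        w_out += ETA * reward * (action - p) * trace
    return reward

rewards = [episode(int(rng.random() < 0.5)) for _ in range(500)]
```

The key design point mirrored here is that learning touches only the readout: the random recurrent network acts as a fixed temporal feature expansion, so the credit-assignment problem reduces to a local, event-driven weight update that is cheap enough to run on-chip in the control loop.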
Problem

Research questions and friction points this paper is trying to address.

spiking neural networks
real-time robotic control
neuromorphic computing
fast interaction tasks
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

spiking neural networks
neuromorphic computing
reinforcement learning
event-driven learning
robotic control
Irene Ambrosini
Institute of Neuroinformatics, UZH and ETH Zurich, Switzerland; Istituto Italiano di Tecnologia, Genoa, Italy
Ingo Blakowski
Institute of Neuroinformatics, UZH and ETH Zurich, Switzerland; Technical University of Munich, Munich, Germany
Dmitrii Zendrikov
Institute of Neuroinformatics, UZH and ETH Zurich
Computational neuroscience, neuromorphic hardware
Cristiano Capone
National Center for Radiation Protection and Computational Physics, Istituto Superiore di Sanità, 00161 Rome, Italy
Luna Gava
Istituto Italiano di Tecnologia
Event-driven perception for robotics
Giacomo Indiveri
Institute of Neuroinformatics, University of Zurich and ETH Zurich
Neuromorphic engineering, neuroscience, bio-signal processing, learning, spiking neural networks
Chiara De Luca
Institute of Neuroinformatics, UZH and ETH Zurich, Switzerland; Digital Society Initiative, University of Zurich, Zurich, Switzerland
Chiara Bartolozzi
Researcher, Fondazione Istituto Italiano di Tecnologia
Neuromorphic engineering