🤖 AI Summary
Current neuromorphic systems lack a theoretical foundation for processing spatiotemporal signals, resulting in unstable and inefficient training of spiking neural networks (SNNs) for event-based vision tasks. To address this, we propose the first rigorously covariant spatiotemporal receptive field model, integrating biologically inspired scale-space theory to jointly guarantee covariance under spatial affine transformations and temporal scale changes. Methodologically, we model spatial receptive fields using affine Gaussian kernels, capture temporal dynamics via leaky integrators and leaky integrate-and-fire neurons, and formulate an event-driven learning framework. This model endows SNNs with biologically plausible spatiotemporal priors, significantly improving training stability and accuracy on event-camera benchmarks. Our work establishes a scalable, theory-driven computational foundation for brain-inspired efficient perception.
📝 Abstract
Biological nervous systems constitute important sources of inspiration towards computers that are faster, cheaper, and more energy efficient. Neuromorphic disciplines view the brain as a coevolved system, simultaneously optimizing the hardware and the algorithms running on it. There are clear efficiency gains when bringing the computations into a physical substrate, but we presently lack theories to guide efficient implementations. Here, we present a principled computational model for neuromorphic systems in terms of spatio-temporal receptive fields, based on affine Gaussian kernels over space and leaky-integrator and leaky integrate-and-fire models over time. Our theory is provably covariant to spatial affine and temporal scaling transformations, and with close similarities to the visual processing in mammalian brains. We use these spatio-temporal receptive fields as a prior in an event-based vision task, and show that this improves the training of spiking networks, which otherwise is known as problematic for event-based vision. This work combines efforts within scale-space theory and computational neuroscience to identify theoretically well-founded ways to process spatio-temporal signals in neuromorphic systems. Our contributions are immediately relevant for signal processing and event-based vision, and can be extended to other processing tasks over space and time, such as memory and control.