🤖 AI Summary
This work proposes SPINONet, a neuroscience-inspired separable physics-informed neural operator designed for power-constrained edge and embedded applications. To address the high energy cost of redundant computation in conventional physics-informed operators, SPINONet introduces regression-friendly spiking neurons into the framework for the first time, combining event-driven sparse computation with a separable operator architecture. This design sharply reduces redundant operations while preserving the continuous differentiability needed to compute spatiotemporal derivatives. The method supports training under pure physical constraints or hybrid data supervision, achieving accuracy comparable to state-of-the-art methods across multiple computational mechanics tasks. Notably, it substantially lowers computational load and energy consumption and avoids spurious solutions in data-scarce regimes.
📝 Abstract
Energy efficiency remains a critical challenge in deploying physics-informed operator learning models for computational mechanics and scientific computing, particularly in power-constrained settings such as edge and embedded devices, where repeated operator evaluations in dense networks incur substantial computational and energy costs. To address this challenge, we introduce the Separable Physics-informed Neuroscience-inspired Operator Network (SPINONet), a framework that reduces redundant computation across repeated evaluations while remaining compatible with physics-informed training. SPINONet incorporates regression-friendly spiking neurons through an architecture-aware design that enables sparse, event-driven computation, improving energy efficiency while preserving the continuous, coordinate-differentiable pathways required for computing spatio-temporal derivatives. We evaluate SPINONet on a range of partial differential equations representative of computational mechanics problems, with spatial, temporal, and parametric dependencies in both time-dependent and steady-state settings, and demonstrate predictive performance comparable to conventional physics-informed operator learning approaches despite the sparse, event-driven communication. In addition, limited data supervision in a hybrid setup is shown to improve performance in challenging regimes where purely physics-informed training may converge to spurious solutions. Finally, we provide an analytical discussion linking the architectural components and design choices of SPINONet to reductions in computational load and energy consumption.
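To make the efficiency argument concrete, the separable idea behind architectures of this kind can be sketched as follows. This is a minimal NumPy illustration of the general separable-operator technique (per-axis feature maps combined by a rank-summed outer product), not the authors' implementation; all network sizes, the `tanh` branches, and the thresholded "event" gate standing in for a spiking nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_axis_features(coords, w, b):
    # Small per-axis "branch": maps N one-dimensional coordinates to r
    # features each. tanh keeps the map smooth, so coordinate derivatives
    # needed for physics-informed residuals remain well defined.
    return np.tanh(coords[:, None] * w + b)  # shape (N, r)

# Illustrative sizes: N collocation points per axis, rank-r representation.
N, r = 64, 8
x = np.linspace(0.0, 1.0, N)  # spatial grid
t = np.linspace(0.0, 1.0, N)  # temporal grid

wx, bx = rng.normal(size=r), rng.normal(size=r)
wt, bt = rng.normal(size=r), rng.normal(size=r)

fx = per_axis_features(x, wx, bx)  # (N, r), evaluated once per axis
ft = per_axis_features(t, wt, bt)  # (N, r)

# Separable combination: u(x_i, t_j) = sum_k fx[i, k] * ft[j, k].
# Cost: two 1-D network evaluations of size N, instead of one dense
# evaluation over all N*N space-time collocation points.
u = np.einsum("ir,jr->ij", fx, ft)  # (N, N) field on the full grid

# Toy event-driven gate standing in for a spiking nonlinearity:
# sub-threshold features emit no "event", sparsifying downstream compute.
events = np.abs(fx) > 0.5
sparsity = 1.0 - events.mean()
```

The `einsum` line is where the savings appear: the expensive nonlinear branches run on O(N·d) one-dimensional inputs, and only the cheap rank-summed product touches the full O(N^d) grid, which is the structural reason repeated operator evaluations become affordable on constrained hardware.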