🤖 AI Summary
This work addresses the limited representational capacity of implicit neural representations (INRs) for modeling complex signals, solving inverse problems, and numerically approximating partial differential equations (PDEs). To overcome this limitation, it brings the "superexpressive" networks of Zhang et al. (NeurIPS 2022), whose structure jointly exploits width, depth, and a third "height" dimension, into the INR paradigm, avoiding the highly specialized nonlinear activation functions on which recent INRs rely. This architecture enables unified modeling across scientific computing and vision tasks, including signal reconstruction, physics-driven inverse inference, and PDE solving. Extensive experiments demonstrate that the superexpressive network consistently outperforms state-of-the-art INR methods employing sophisticated activation functions across diverse benchmarks, achieving superior signal representation fidelity, better generalization, and a stronger capacity to learn and enforce physical constraints, thereby advancing the expressivity and applicability of INRs in science-informed learning.
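To make the "height" idea concrete: in Zhang et al.'s construction, a network of height s uses smaller networks of height s-1 in place of scalar activation functions, so expressive power grows along a third structural axis in addition to width and depth. Below is a minimal PyTorch sketch under that reading; the class names, dimensions, and the choice of ReLU in the base network are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Height1Net(nn.Module):
    """Base case of the nesting: a plain scalar-to-scalar MLP ("height 1").
    The ReLU activation here is an illustrative assumption."""
    def __init__(self, width: int, depth: int):
        super().__init__()
        layers = [nn.Linear(1, width), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ReLU()]
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class NestedLayer(nn.Module):
    """A linear layer whose 'activation' is itself a small network applied
    elementwise -- this nesting is the extra 'height' axis."""
    def __init__(self, in_dim: int, out_dim: int, inner: nn.Module):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.inner = inner  # shared height-(s-1) subnetwork

    def forward(self, x):
        z = self.linear(x)  # (batch, out_dim)
        # Apply the subnetwork to each pre-activation scalar.
        return self.inner(z.reshape(-1, 1)).reshape(z.shape)

class Height2Net(nn.Module):
    """Height-2 network: each hidden unit's activation is a height-1 net."""
    def __init__(self, in_dim=2, width=64, depth=3, out_dim=1):
        super().__init__()
        inner = Height1Net(width=8, depth=2)
        blocks = [NestedLayer(in_dim, width, inner)]
        for _ in range(depth - 1):
            blocks.append(NestedLayer(width, width, inner))
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(width, out_dim)

    def forward(self, coords):
        return self.head(self.blocks(coords))
```

A call like `Height2Net(in_dim=2)(torch.rand(16, 2))` then evaluates the nested network on a batch of 2D coordinates.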
📝 Abstract
In this study, we examine the potential of one of the "superexpressive" networks in the context of learning neural functions for representing complex signals and performing downstream machine learning tasks. Our focus is on evaluating their performance on computer vision and scientific machine learning tasks, including signal representation, inverse problems, and the solution of partial differential equations. Through an empirical investigation on various benchmark tasks, we demonstrate that superexpressive networks, as proposed by [Zhang et al., NeurIPS 2022], which employ a specialized network structure characterized by an additional dimension beyond width and depth, namely "height", can surpass recent implicit neural representations that use highly specialized nonlinear activation functions.
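As a sketch of how such a network would serve as an implicit neural representation, the snippet below fits a coordinate-to-value model to a toy 1D signal by plain mean-squared-error regression, reusing the hypothetical `Height2Net` from the sketch above; the target signal, optimizer, and hyperparameters are assumptions for illustration, not the paper's experimental setup.

```python
import math
import torch

# Toy 1D signal to represent: a damped sinusoid (an assumption).
def target_signal(x):
    return torch.sin(8 * math.pi * x) * torch.exp(-3 * x)

coords = torch.linspace(0.0, 1.0, 512).unsqueeze(1)  # (512, 1) coordinates
values = target_signal(coords)

# Height2Net is the illustrative nested network defined earlier.
model = Height2Net(in_dim=1, width=64, depth=3, out_dim=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    optimizer.zero_grad()
    pred = model(coords)
    loss = torch.mean((pred - values) ** 2)  # plain regression loss
    loss.backward()
    optimizer.step()
```

For the PDE tasks mentioned above, the same model could instead be trained on a physics residual: autograd supplies derivatives of the network output with respect to its input coordinates, and the loss penalizes the PDE residual plus boundary terms.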