Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited representational capacity of implicit neural representations (INRs) in modeling complex signals, solving inverse problems, and numerically approximating partial differential equations (PDEs). To overcome this limitation, the authors propose a superexpressive network (SEN) with a three-dimensional structure that jointly exploits width, depth, and "height", rather than relying on highly specialized nonlinear activation functions. The architecture integrates the superexpressive mechanism of Zhang et al. (NeurIPS 2022) with the INR paradigm, enabling unified modeling across scientific computing and vision tasks, including signal reconstruction, physics-driven inverse inference, and PDE solving. Extensive experiments demonstrate that SEN consistently outperforms state-of-the-art INR methods that employ sophisticated activation functions across diverse benchmarks: it achieves superior signal representation fidelity, better generalization, and a stronger capacity to learn and enforce physical constraints, thereby advancing the expressivity and applicability of INRs in science-informed learning.
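To make the INR setting concrete: an INR is a model that maps coordinates x to signal values f(x), fitted from samples, and can then be evaluated at any coordinate. The sketch below illustrates only this general idea, not the paper's SEN architecture (whose details are not given in this summary); as a simple stand-in for a trained network, it fits a Fourier-feature model with a closed-form least-squares readout.

```python
import numpy as np

# Minimal sketch of the INR idea: a continuous model x -> f(x) fitted from
# coordinate samples. Real INRs use trained MLPs (e.g. with sinusoidal
# activations, or the SEN studied in this paper); here a Fourier-feature
# basis with a linear least-squares readout serves as a simple stand-in.

x = np.linspace(0.0, 1.0, 256)[:, None]                          # coordinates
y = np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)  # target signal

freqs = 2.0 * np.pi * np.arange(1, 33)                           # feature frequencies
phi = np.concatenate([np.sin(x * freqs), np.cos(x * freqs)], axis=1)

# "Training": solve for the readout weights in closed form.
w, *_ = np.linalg.lstsq(phi, y, rcond=None)

# The fitted model is continuous: evaluate it off the sample grid.
x_new = np.array([[0.123]])
phi_new = np.concatenate([np.sin(x_new * freqs), np.cos(x_new * freqs)], axis=1)
y_new = phi_new @ w

mse = float(np.mean((phi @ w - y) ** 2))
print(f"train MSE: {mse:.2e}")
```

Because the target signal lies in the span of the chosen Fourier basis, the fit here is essentially exact; the paper's point is that for complex real-world signals the choice of architecture (structure vs. activation function) determines how well such a fit can be achieved.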

📝 Abstract
In this study, we examine the potential of one class of "superexpressive" networks for learning neural functions that represent complex signals and perform downstream machine learning tasks. Our focus is on evaluating their performance on computer vision and scientific machine learning tasks, including signal representation, inverse problems, and the solution of partial differential equations. Through an empirical investigation on various benchmark tasks, we demonstrate that superexpressive networks, as proposed by [Zhang et al., NeurIPS 2022], which employ a specialized network structure characterized by an additional dimension beyond width and depth, namely "height", can surpass recent implicit neural representations that use highly specialized nonlinear activation functions.
Problem

Research questions and friction points this paper is trying to address.

Evaluating superexpressive networks for complex signal representation
Assessing performance in computer vision and scientific machine learning
Comparing network structures for implicit neural representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Superexpressive networks enhance implicit neural representations
Specialized 3D structure: width, depth, and height
Outperforms implicit neural representations based on specialized nonlinear activation functions
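The bullets above mention a third structural dimension, "height", beyond width and depth. In the superexpressive construction of Zhang et al., this is realized, roughly, by nesting: activation units of the outer network are themselves small networks. The toy forward pass below is only an illustration of that nesting idea, with made-up layer sizes and random (untrained) weights; it is not the paper's actual SEN configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def height1_unit(z, p):
    # A height-1 "activation": a tiny ReLU network applied elementwise,
    # replacing a fixed scalar nonlinearity. p holds its weights.
    h = np.maximum(np.outer(z, p["w1"]) + p["b1"], 0.0)   # (n, hidden)
    return h @ p["w2"] + p["b2"]                          # (n,)

def make_unit(hidden=4):
    return {"w1": rng.normal(size=hidden), "b1": rng.normal(size=hidden),
            "w2": rng.normal(size=hidden), "b2": rng.normal()}

def height2_forward(x, layers, units):
    # Outer network: alternating linear layers and nested-unit "activations".
    z = x
    for (W, b), unit in zip(layers, units):
        z = z @ W + b               # width/depth: ordinary linear layer
        z = height1_unit(z, unit)   # height: the unit is itself a network
    return z

width = 8
layers = [(rng.normal(size=(1, width)), rng.normal(size=width)),
          (rng.normal(size=(width, width)), rng.normal(size=width))]
units = [make_unit(), make_unit()]

out = height2_forward(np.array([0.5]), layers, units)
print(out.shape)
```

Stacking further levels, where the nested units' own activations are again small networks, would increase the height dimension; how SEN balances width, depth, and height in practice is not specified in this summary.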