Split-Layer: Enhancing Implicit Neural Representation by Maximizing the Dimensionality of Feature Space

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit neural representations (INRs) are limited by the low-dimensional feature space of standard MLPs, which hinders efficient modeling of complex continuous signals. To address this, the paper proposes the split-layer: an architectural primitive that decomposes a single MLP layer into multiple parallel branches and fuses their outputs via a Hadamard product. Because the elementwise product of k affine branches yields degree-k polynomial features, this design greatly expands the effective feature-space dimensionality without significant parameter or computational overhead, easing the classical MLP capacity bottleneck while preserving differentiability and ease of optimization. Extensive experiments on 2D image fitting, 2D CT reconstruction, 3D shape representation, and 5D novel-view synthesis show consistent improvements in reconstruction accuracy and convergence speed over state-of-the-art INR baselines. The core contribution is the integration of a branch-parallel structure with elementwise multiplicative fusion into INR architectures, establishing a lightweight, efficient, and scalable paradigm for continuous signal representation.
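A minimal sketch of the split-layer forward pass as described above (hypothetical shapes and names, not the authors' implementation): each branch is a plain affine map, and the branch outputs are fused with an elementwise (Hadamard) product.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_layer(x, weights, biases):
    """Apply each parallel affine branch to x, then fuse the
    branch outputs with an elementwise (Hadamard) product."""
    out = np.ones(weights[0].shape[0])
    for W, b in zip(weights, biases):
        out = out * (W @ x + b)  # Hadamard fusion of branch outputs
    return out

# Two branches of width 4 acting on a 3-D input coordinate.
d_in, d_out, n_branches = 3, 4, 2
weights = [rng.standard_normal((d_out, d_in)) for _ in range(n_branches)]
biases = [rng.standard_normal(d_out) for _ in range(n_branches)]

x = rng.standard_normal(d_in)
y = split_layer(x, weights, biases)
print(y.shape)  # (4,)
```

With two branches, each output coordinate is a product of two affine functions of the input, i.e. a degree-2 polynomial in the input coordinates; stacking such layers raises the polynomial degree further.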

📝 Abstract
Implicit neural representation (INR) models signals as continuous functions using neural networks, offering efficient and differentiable optimization for inverse problems across diverse disciplines. However, the representational capacity of INR, defined by the range of functions the neural network can characterize, is inherently limited by the low-dimensional feature space in conventional multilayer perceptron (MLP) architectures. While widening the MLP can linearly increase feature space dimensionality, it also leads to a quadratic growth in computational and memory costs. To address this limitation, we propose the split-layer, a novel reformulation of MLP construction. The split-layer divides each layer into multiple parallel branches and integrates their outputs via Hadamard product, effectively constructing a high-degree polynomial space. This approach significantly enhances INR's representational capacity by expanding the feature space dimensionality without incurring prohibitive computational overhead. Extensive experiments demonstrate that the split-layer substantially improves INR performance, surpassing existing methods across multiple tasks, including 2D image fitting, 2D CT reconstruction, 3D shape representation, and 5D novel view synthesis.
Problem

Research questions and friction points this paper is trying to address.

Enhancing implicit neural representation capacity through feature space expansion
Overcoming computational limitations of traditional MLP architectures in INR
Improving performance across 2D/3D reconstruction and novel view synthesis tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Split-layer divides MLP into parallel branches
Uses Hadamard product to integrate branch outputs
Expands feature space dimensionality efficiently
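The abstract's cost argument can be checked with simple arithmetic (illustrative widths, not numbers from the paper): widening a hidden layer by a factor k grows its input and output dimensions together, so its parameter count grows roughly quadratically in k, whereas k parallel branches of the original width, fused by a Hadamard product, cost only about k times one layer.

```python
def dense_params(d_in, d_out):
    # weight matrix plus bias vector
    return d_in * d_out + d_out

w, k = 256, 3  # hypothetical base width and branch count

# Widening both input and output of a hidden layer by k:
wide = dense_params(k * w, k * w)

# k parallel branches of the original width, Hadamard-fused:
split = k * dense_params(w, w)

print(wide, split)  # the widened layer costs roughly k times more
assert wide > split
```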