INR-Bench: A Unified Benchmark for Implicit Neural Representations in Multi-Domain Regression and Reconstruction

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing studies lack a systematic analysis of frequency adaptation mechanisms in implicit neural representations (INRs) for multi-task learning. Method: We introduce INR-Bench—the first unified benchmark for multi-domain signal processing—comprising 56 Coordinate-MLP and 22 Coordinate-KAN architectures, integrated with 4 positional encoding schemes, 14 activation functions, and multiple basis functions, evaluated across 9 forward/inverse multimodal tasks. Leveraging Neural Tangent Kernel (NTK) theory, we quantitatively characterize how network architecture, positional encoding, and nonlinear units jointly govern frequency response. Contribution/Results: INR-Bench establishes the first comprehensive INR analysis framework spanning diverse architectures, activations, and basis functions. It provides an open-source implementation with standardized datasets and evaluation protocols, enabling reproducible, scalable, and principled research on INRs. The benchmark reveals fundamental trade-offs in spectral bias and generalization across architectural choices, offering actionable insights for designing frequency-aware implicit models.

📝 Abstract
Implicit Neural Representations (INRs) have gained success in various signal processing tasks due to their advantages of continuity and infinite resolution. However, the factors influencing their effectiveness and limitations remain underexplored. To better understand these factors, we leverage insights from Neural Tangent Kernel (NTK) theory to analyze how model architectures (classic MLP and emerging KAN), positional encoding, and nonlinear primitives affect the response to signals of varying frequencies. Building on this analysis, we introduce INR-Bench, the first comprehensive benchmark specifically designed for multimodal INR tasks. It includes 56 variants of Coordinate-MLP models (featuring 4 types of positional encoding and 14 activation functions) and 22 Coordinate-KAN models with distinct basis functions, evaluated across 9 implicit multimodal tasks. These tasks cover both forward and inverse problems, offering a robust platform to highlight the strengths and limitations of different neural models, thereby establishing a solid foundation for future research. The code and dataset are available at https://github.com/lif314/INR-Bench.
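To make the NTK-based analysis mentioned above concrete, here is a minimal sketch (not the paper's implementation) of computing the empirical NTK of a tiny Fourier-encoded coordinate MLP. All names (`fourier_encode`, the tanh activation, the layer widths) are illustrative assumptions; the idea is that the kernel K(x, x') = ∇θf(x)·∇θf(x') governs which signal frequencies the model fits quickly, which is exactly the quantity the benchmark's analysis studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_encode(x, num_freqs=4):
    # Fourier positional encoding: [sin(2^k * pi * x), cos(2^k * pi * x)]
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    return np.concatenate([np.sin(freqs * x), np.cos(freqs * x)])

# Tiny one-hidden-layer coordinate MLP with random (untrained) weights
D, H = 8, 32  # encoding dim (2 * num_freqs) and hidden width
W1 = rng.normal(size=(H, D)) / np.sqrt(D)
b1 = np.zeros(H)
W2 = rng.normal(size=H) / np.sqrt(H)

def param_grad(x):
    """Gradient of the scalar output w.r.t. all parameters, flattened."""
    g = fourier_encode(x)
    h = np.tanh(W1 @ g + b1)
    d_pre = W2 * (1.0 - h ** 2)       # d out / d pre-activation
    d_W1 = np.outer(d_pre, g)         # d out / d W1
    return np.concatenate([d_W1.ravel(), d_pre, h])  # [dW1, db1, dW2]

def empirical_ntk(xs):
    J = np.stack([param_grad(x) for x in xs])  # (N, P) parameter Jacobian
    return J @ J.T                             # (N, N) kernel Gram matrix

xs = np.linspace(0.0, 1.0, 16)
K = empirical_ntk(xs)  # symmetric PSD; its spectrum reflects frequency bias
```

Raising `num_freqs` broadens the kernel's eigenspectrum, which is the standard NTK account of why positional encoding mitigates spectral bias in coordinate MLPs.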
Problem

Research questions and friction points this paper is trying to address.

Analyzing how model architectures affect signal frequency response
Evaluating 78 neural models across 9 multimodal implicit tasks
Establishing benchmark for implicit neural representation limitations and strengths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes INR performance using Neural Tangent Kernel theory
Introduces INR-Bench benchmark with 78 model variants
Evaluates models across 9 multimodal forward and inverse tasks
Linfei Li
PhD Student, Tongji University
Computer Vision, Robot Learning
Fengyi Zhang
School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia
Zhong Wang
Department of Automation, Shanghai Jiao Tong University, Shanghai, China
Lin Zhang
School of Computer Science and Technology, Tongji University, Shanghai, China
Ying Shen
School of Computer Science and Technology, Tongji University, Shanghai, China