Where Do We Stand with Implicit Neural Representations? A Technical and Performance Survey

📅 2024-11-06
🏛️ arXiv.org
🤖 AI Summary
This paper systematically surveys implicit neural representations (INRs), identifying key performance bottlenecks—particularly limited expressivity and scalability—in inverse problems such as audio synthesis, image reconstruction, 3D scene modeling, and high-dimensional data generation. To address these challenges, we propose the first unified four-dimensional taxonomy—spanning activation functions, positional encodings, joint encoding strategies, and network architectures—and uncover a fundamental trade-off between local bias suppression and fine-grained detail modeling. Through cross-modal benchmarking and gradient-guided structural optimization, we quantitatively evaluate reconstruction fidelity, memory efficiency, and generalization capability. Our contributions include: (i) an open, reproducible experimental benchmark; (ii) identification of three critical research frontiers—activation expressivity, positional encoding robustness, and high-dimensional scalability; and (iii) theoretically grounded, practice-oriented guidance for INR method selection and future development.

📝 Abstract
Implicit Neural Representations (INRs) have emerged as a paradigm in knowledge representation, offering exceptional flexibility and performance across a diverse range of applications. INRs leverage multilayer perceptrons (MLPs) to model data as continuous implicit functions, providing critical advantages such as resolution independence, memory efficiency, and generalisation beyond discretised data structures. Their ability to solve complex inverse problems makes them particularly effective for tasks including audio reconstruction, image representation, 3D object reconstruction, and high-dimensional data synthesis. This survey provides a comprehensive review of state-of-the-art INR methods, introducing a clear taxonomy that categorises them into four key areas: activation functions, positional encoding, combined strategies, and network structure optimisation. We rigorously analyse their critical properties, such as full differentiability, smoothness, compactness, and adaptability to varying resolutions, while also examining their strengths and limitations in addressing locality biases and capturing fine details. Our experimental comparison offers new insights into the trade-offs between different approaches, showcasing the capabilities and challenges of the latest INR techniques across various tasks. In addition to identifying areas where current methods excel, we highlight key limitations and potential avenues for improvement, such as developing more expressive activation functions, enhancing positional encoding mechanisms, and improving scalability for complex, high-dimensional data. This survey serves as a roadmap for researchers, offering practical guidance for future exploration in the field of INRs. We aim to foster new methodologies by outlining promising research directions for INRs and their applications.
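The abstract's core idea, an MLP that models a signal as a continuous function of coordinates, can be illustrated with a minimal sketch. The example below fits a one-hidden-layer sine-activated network (in the spirit of SIREN, one of the activation-function families such surveys cover) to a 1-D signal using plain gradient descent in NumPy. All sizes, the frequency factor `w0`, the learning rate, and the step count are illustrative choices, not values from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coordinates and the signal the network will represent: the INR is a
# continuous function f(x) ~ y that can later be queried at any x,
# not just the training grid (resolution independence).
x = np.linspace(-1.0, 1.0, 256)[:, None]
y = np.sin(4.0 * np.pi * x)

# One-hidden-layer MLP with sine activations (SIREN-style initialisation;
# hidden width and w0 are illustrative, not taken from the paper).
hidden, w0 = 64, 30.0
W1 = rng.uniform(-1.0, 1.0, (1, hidden))        # first-layer init, input dim 1
b1 = np.zeros(hidden)
lim = np.sqrt(6.0 / hidden) / w0                # SIREN-style scale for later layers
W2 = rng.uniform(-lim, lim, (hidden, 1))
b2 = np.zeros(1)

def forward(x):
    z = w0 * (x @ W1 + b1)                      # pre-activation, frequency-scaled
    h = np.sin(z)                               # sine activation
    return h @ W2 + b2, h, z

losses, lr = [], 1e-3
for _ in range(2000):
    pred, h, z = forward(x)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g = 2.0 * err / len(x)                      # dL/dpred for the MSE loss
    dW2, db2 = h.T @ g, g.sum(0)                # output-layer gradients
    du = (g @ W2.T) * np.cos(z) * w0            # back through sin(w0 * u)
    dW1, db1 = x.T @ du, du.sum(0)              # first-layer gradients
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Because the representation is continuous, it can be sampled at a finer
# resolution than the training data.
x_fine = np.linspace(-1.0, 1.0, 1024)[:, None]
y_fine = forward(x_fine)[0]
```

The same pattern generalises: for images the input is a 2-D pixel coordinate, and for 3-D scenes a spatial position (plus, in radiance fields, a viewing direction).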
Problem

Research questions and friction points this paper is trying to address.

Evaluates the current landscape of Implicit Neural Representations (INRs)
Analyses existing INR methods through a unified taxonomy
Identifies limitations of current INRs and avenues for improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages multilayer perceptrons (MLPs) as the representation backbone
Models data as continuous implicit functions of coordinates
Categorises methods into four key areas: activation functions, positional encoding, combined strategies, and network structure optimisation
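One of the four taxonomy areas, positional encoding, can be sketched concretely. The function below implements the widely used NeRF-style Fourier feature mapping, which lifts low-dimensional coordinates onto sinusoids at geometrically spaced frequencies so a downstream MLP can capture fine detail; the number of frequency bands is an illustrative choice.

```python
import numpy as np

def fourier_features(coords, n_freqs=6):
    """Map coordinates of shape (..., d) to pairs [sin(2^k pi x), cos(2^k pi x)]
    for k = 0..n_freqs-1 (NeRF-style positional encoding; n_freqs is an
    illustrative default)."""
    bands = (2.0 ** np.arange(n_freqs)) * np.pi   # geometric frequency ladder
    ang = coords[..., None] * bands               # shape (..., d, n_freqs)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)    # flatten per point

pts = np.array([[0.25, 0.5]])                     # one 2-D coordinate
emb = fourier_features(pts)                       # shape (1, 2 * 2 * 6) = (1, 24)
```

Feeding `emb` rather than raw coordinates into an MLP is the standard remedy for the spectral bias toward low frequencies that the survey's abstract alludes to.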
Authors

Amer Essakine (Department of Applied Mathematics and Theoretical Physics, University of Cambridge; ENS Paris-Saclay)
Yanqi Cheng (University of Cambridge)
Chun-Wun Cheng (PhD student, University of Cambridge)
Lipei Zhang (Department of Applied Mathematics and Theoretical Physics, University of Cambridge)
Zhongying Deng (University of Cambridge)
Lei Zhu (ROAS & DSA, The Hong Kong University of Science and Technology (Guangzhou))
C. Schönlieb (Department of Applied Mathematics and Theoretical Physics, University of Cambridge)
Angelica I. Avilés-Rivero (Yau Mathematical Sciences Center, Tsinghua University; Department of Applied Mathematics and Theoretical Physics, University of Cambridge)