Sampling Theory for Super-Resolution with Implicit Neural Representations

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the Fourier sampling theory of implicit neural representations (INRs) for continuous-domain super-resolution reconstruction, addressing the open problem of determining the minimal number of low-frequency Fourier measurements required for exact image recovery. Methodologically, it establishes a rigorous equivalence between training a single hidden-layer ReLU network with Fourier features under generalized weight decay regularization and convex optimization over the space of Radon measures. This equivalence enables derivation of a provable bound on the number of Fourier samples sufficient for exact reconstruction. Theoretically, the bound scales with the complexity of the image as captured by the network width, rather than with its pixel count. Empirical validation on synthetic phantom images demonstrates high-probability exact super-resolution recovery. This appears to be the first work to provide a verifiable, theoretically grounded characterization of sampling complexity for INR-based inverse problems.

📝 Abstract
Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer vision and computational imaging. INRs represent images as continuous domain functions realized by a neural network taking spatial coordinates as inputs. However, unlike traditional pixel representations, little is known about the sample complexity of estimating images using INRs in the context of linear inverse problems. Towards this end, we study the sampling requirements for recovery of a continuous domain image from its low-pass Fourier samples by fitting a single hidden-layer INR with ReLU activation and a Fourier features layer using a generalized form of weight decay regularization. Our key insight is to relate minimizers of this non-convex parameter space optimization problem to minimizers of a convex penalty defined over an infinite-dimensional space of measures. We identify a sufficient number of Fourier samples for which an image realized by an INR is exactly recoverable by solving the INR training problem. To validate our theory, we empirically assess the probability of achieving exact recovery of images realized by low-width single hidden-layer INRs, and illustrate the performance of INRs on super-resolution recovery of continuous domain phantom images.
Problem

Research questions and friction points this paper is trying to address.

Determine sampling requirements for image recovery using INRs
Study exact recoverability of images from Fourier samples
Validate INR performance on super-resolution of phantom images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses implicit neural representations for super-resolution
Applies Fourier features layer with ReLU activation
Employs generalized weight decay regularization
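The components listed above can be illustrated with a minimal sketch of the architecture the paper studies: a Fourier features first layer feeding a single hidden ReLU layer, trained with a weight decay penalty. All names, sizes, and the penalty weight below are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 8        # number of Fourier feature frequencies (assumed)
width = 32   # hidden-layer width (assumed)

freqs = np.arange(1, K + 1, dtype=float)   # low-frequency grid
W = rng.standard_normal((width, 2 * K))    # hidden-layer weights
b = rng.standard_normal(width)             # hidden-layer biases
a = rng.standard_normal(width)             # output weights

def fourier_features(x):
    """Map coordinates x in [0, 1) to [cos(2*pi*k*x), sin(2*pi*k*x)]."""
    angles = 2.0 * np.pi * np.outer(x, freqs)            # shape (n, K)
    return np.concatenate([np.cos(angles), np.sin(angles)], axis=1)

def inr(x):
    """Evaluate f(x) = a^T ReLU(W phi(x) + b) at coordinates x."""
    h = np.maximum(fourier_features(x) @ W.T + b, 0.0)   # ReLU hidden layer
    return h @ a

def weight_decay_penalty(lam=1e-3):
    """Standard weight decay on the trainable layers (a simplification
    of the paper's generalized form): lam/2 * (||W||_F^2 + ||a||^2)."""
    return 0.5 * lam * (np.sum(W**2) + np.sum(a**2))

# Evaluate the continuous-domain image representation on a coordinate grid.
x = np.linspace(0.0, 1.0, 64, endpoint=False)
y = inr(x)
print(y.shape)  # (64,)
```

In the paper's setting, the network would be fit so that the low-pass Fourier coefficients of `inr` match the given measurements, with the penalty added to the training objective; the sketch here only shows the forward pass and the regularizer.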
Mahrokh Najaf
Marquette University
Machine Learning · Medical Image Reconstruction · Signal Processing · Optimization
Gregory Ongie
Department of Mathematical and Statistical Sciences, Marquette University, Milwaukee, WI, USA