Optimizing Rank for High-Fidelity Implicit Neural Representations

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit Neural Representations (INRs) based on MLPs exhibit limited high-frequency reconstruction capability, conventionally attributed to an intrinsic spectral bias; this work instead identifies weight-matrix rank degeneration during training, not an architectural limitation, as the fundamental bottleneck. Method: We systematically establish rank degeneration as the critical mechanism impeding high-frequency signal representation and propose a high-rank optimization strategy using optimizers such as Muon, whose high-rank, near-orthogonal weight updates counteract rank collapse without modifying the network architecture. Contribution/Results: Our approach requires only standard ReLU MLPs, with no coordinate encoding or specialized activation functions, and achieves up to 9 dB PSNR improvement over state-of-the-art methods across natural images, medical imaging, and novel view synthesis tasks, demonstrating superior signal fidelity and generalizability.

📝 Abstract
Implicit Neural Representations (INRs) based on vanilla Multi-Layer Perceptrons (MLPs) are widely believed to be incapable of representing high-frequency content. This has directed research efforts towards architectural interventions, such as coordinate embeddings or specialized activation functions, to represent high-frequency signals. In this paper, we challenge the notion that the low-frequency bias of vanilla MLPs is an intrinsic, architectural limitation on learning high-frequency content, arguing instead that it is a symptom of stable rank degradation during training. We empirically demonstrate that regulating the network's rank during training substantially improves the fidelity of the learned signal, rendering even simple MLP architectures expressive. Extensive experiments show that using optimizers like Muon, with high-rank, near-orthogonal updates, consistently enhances INR architectures even beyond simple ReLU MLPs. These substantial improvements hold across a diverse range of domains, including natural and medical images, and novel view synthesis, with up to 9 dB PSNR improvements over the previous state-of-the-art. Our project page, which includes code and experimental results, is available at: https://muon-inrs.github.io.
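The stable rank the abstract refers to has a simple closed form: the squared Frobenius norm divided by the squared spectral norm. A minimal sketch (not from the paper) of how one might monitor it for a weight matrix:

```python
import numpy as np

def stable_rank(W: np.ndarray) -> float:
    """Stable rank ||W||_F^2 / ||W||_2^2: a smooth, scale-invariant
    proxy for matrix rank, maximal (= min(m, n)) for orthogonal W."""
    fro_sq = float(np.sum(W ** 2))
    spec = float(np.linalg.norm(W, ord=2))  # largest singular value
    return fro_sq / spec ** 2

rng = np.random.default_rng(0)

# An orthogonal weight matrix has full stable rank ...
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(stable_rank(Q))        # ≈ 64.0 (all singular values equal 1)

# ... while a degenerate (rank-1) matrix has stable rank 1.
u = rng.standard_normal((64, 1))
print(stable_rank(u @ u.T))  # ≈ 1.0 (single nonzero singular value)
```

Rank degradation in this sense means the stable rank of trained weights drifting toward the rank-1 end of this scale.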
Problem

Research questions and friction points this paper is trying to address.

Challenges low-frequency bias in MLPs for high-fidelity signals
Addresses rank degradation during training to improve INR performance
Enhances implicit neural representations across diverse domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regulating network rank during training improves fidelity
Using high-rank near-orthogonal optimizers like Muon enhances architectures
Simple MLPs become expressive with rank control across domains
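The bullets above hinge on optimizers whose updates are near-orthogonal. A hypothetical sketch of that idea (not the paper's or Muon's actual implementation): orthogonalize the momentum buffer with the classic cubic Newton-Schulz iteration before applying it, so every update has full stable rank.

```python
import numpy as np

def orthogonalize(G: np.ndarray, steps: int = 12) -> np.ndarray:
    """Approximate G's polar factor (the nearest semi-orthogonal matrix)
    via the classic cubic Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X.
    Scaling by the Frobenius norm puts all singular values in (0, 1],
    where the iteration converges."""
    X = G / (np.linalg.norm(G) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def high_rank_step(W, G, buf, lr=0.02, beta=0.95):
    """One hypothetical Muon-style step: momentum on the raw gradient G,
    then the *orthogonalized* buffer (full stable rank) updates W."""
    buf = beta * buf + G
    return W - lr * orthogonalize(buf), buf
```

Because the orthogonalized update has all of its nonzero singular values near 1, repeated steps cannot concentrate the weights onto a few dominant directions, which is the rank-degeneration failure mode described above.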
👥 Authors
Julian McGinnis (Technical University of Munich): Machine Learning, Graph Learning, Medical Imaging
Florian A. Hölzl (Hasso-Plattner-Institute, University of Potsdam, Germany)
Suprosanna Shit (University of Zurich | ETH AI Center): Machine Learning, Medical Imaging, Computer Vision, Signal Processing
Florentin Bieder (University of Basel)
Paul Friedrich (Center for medical Image Analysis and Navigation, University of Basel, Switzerland)
Mark Mühlau (Department of Neurology, School of Medicine and Health, Technical University of Munich, Germany)
Björn Menze (Department of Biomedical Engineering, University of Zurich, Switzerland)
Daniel Rueckert (Technical University of Munich and Imperial College London): Machine Learning, Medical Image Computing, Biomedical Image Analysis, Computer Vision
Benedikt Wiestler (Chair for AI for Image-Guided Diagnosis and Therapy, Technical University of Munich, Germany)