Detail Across Scales: Multi-Scale Enhancement for Full Spectrum Neural Representations

πŸ“… 2025-09-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing implicit neural representations (INRs) are constrained by compact network architectures, limiting their ability to model the multi-scale structures, high-frequency components, and fine-grained textures prevalent in scientific data. To address this, we propose WIEN-INR, a wavelet-guided multi-scale INR framework that integrates the discrete wavelet transform (DWT) into the network design as a structural prior, enabling hierarchical feature decomposition across resolutions. At the finest scale, a lightweight, dedicated kernel network captures subtle textural details. Crucially, WIEN-INR encodes the full spectrum of information without increasing the model's parameter count. Experiments across multiple scientific datasets demonstrate that WIEN-INR significantly improves reconstruction fidelity for high-frequency details and complex structures, while reducing model size by 32%–57%, accelerating training by 1.8×–2.4Γ—, and lowering storage overhead. This establishes a new paradigm for high-fidelity scientific data modeling under resource-constrained conditions.
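As a rough illustration of the wavelet prior described above (not the authors' code), the sketch below uses NumPy and PyWavelets to perform the kind of hierarchical decomposition WIEN-INR builds on. The `haar` wavelet, the three-level decomposition, and the synthetic field are illustrative assumptions; in the paper's scheme each subband would be encoded by its own compact network, and the exact inverse DWT would stitch the scales back together.

```python
# Minimal sketch (not the authors' implementation): the DWT as a
# structural prior, decomposing a 2-D field into per-scale subbands.
import numpy as np
import pywt

# Synthetic 2-D "scientific" field with structure at several scales.
x = np.linspace(0, 4 * np.pi, 256)
data = np.sin(x)[None, :] * np.cos(3 * x)[:, None] + 0.1 * np.random.randn(256, 256)

# Hierarchical decomposition: one coarse approximation band plus
# (horizontal, vertical, diagonal) detail bands at each level.
level = 3
coeffs = pywt.wavedec2(data, wavelet="haar", level=level)

print("approximation band:", coeffs[0].shape)
for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"level {level - i + 1} detail bands:", cH.shape)

# Each band is what a small per-scale INR would be asked to fit;
# the inverse DWT recovers the full-resolution field exactly.
recon = pywt.waverec2(coeffs, wavelet="haar")
print("max round-trip error:", np.abs(recon - data).max())
```

Because the inverse transform is exact, any reconstruction error comes entirely from how well the per-scale networks fit their subbands, which is what lets each network stay small.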

πŸ“ Abstract
Implicit neural representations (INRs) have emerged as a compact, parametric alternative to discrete array-based data representations, encoding information directly in neural network weights to enable resolution-independent representation and memory efficiency. However, existing INR approaches, when constrained to compact network sizes, struggle to faithfully represent the multi-scale structures, high-frequency information, and fine textures that characterize most scientific datasets. To address this limitation, we propose WIEN-INR, a wavelet-informed implicit neural representation that distributes modeling across resolution scales and employs a specialized kernel network at the finest scale to recover subtle details. This multi-scale architecture allows smaller networks to retain the full spectrum of information while preserving training efficiency and reducing storage cost. Through extensive experiments on diverse scientific datasets spanning different scales and structural complexities, WIEN-INR achieves superior reconstruction fidelity while maintaining a compact model size. These results establish WIEN-INR as a practical neural representation framework for high-fidelity scientific data encoding, extending the applicability of INRs to domains where efficient preservation of fine detail is essential.
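The multi-scale split plus finest-scale kernel network described in the abstract can be sketched in PyTorch as follows. This is a hedged approximation, not the paper's architecture: the class names (`ScaleMLP`, `KernelNet`, `MultiScaleINR`), the layer widths, and the SIREN-style sine activations are all assumptions made for illustration.

```python
# Minimal PyTorch sketch of a multi-scale INR with a small residual
# "kernel network" at the finest scale. All names and sizes are
# illustrative assumptions, not WIEN-INR's actual design.
import torch
import torch.nn as nn

class ScaleMLP(nn.Module):
    """Compact coordinate MLP intended to model one resolution scale."""
    def __init__(self, hidden=64, omega=30.0):
        super().__init__()
        self.omega = omega                    # SIREN-style sine frequency
        self.fc1 = nn.Linear(2, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, xy):
        h = torch.sin(self.omega * self.fc1(xy))
        h = torch.sin(self.fc2(h))
        return self.out(h)

class KernelNet(nn.Module):
    """Lightweight network for residual fine-scale texture."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.GELU(), nn.Linear(hidden, 1))

    def forward(self, xy):
        return self.net(xy)

class MultiScaleINR(nn.Module):
    """Sums per-scale predictions, then adds a fine-detail residual."""
    def __init__(self, num_scales=3):
        super().__init__()
        self.scales = nn.ModuleList(ScaleMLP() for _ in range(num_scales))
        self.kernel = KernelNet()

    def forward(self, xy):
        coarse = sum(scale(xy) for scale in self.scales)
        return coarse + self.kernel(xy)

model = MultiScaleINR()
xy = torch.rand(1024, 2) * 2 - 1   # query coordinates in [-1, 1]^2
pred = model(xy)                   # (1024, 1) predicted field values
print(pred.shape)
```

Keeping the kernel network separate from the per-scale MLPs mirrors the abstract's point: the coarse networks can stay small because recovering subtle detail is delegated to a dedicated, equally compact component.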
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-scale detail representation in neural networks
Addressing high-frequency information loss in compact INRs
Improving fine texture preservation for scientific datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet-informed multi-scale neural representation
Specialized kernel network for fine details
Compact networks preserving full spectrum information
Yuan Ni
SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA; Stanford Institute for Materials and Energy Sciences, Stanford University, Stanford, CA 94305, USA.
Zhantao Chen
Assistant Professor, UT Austin; previously at SLAC and MIT
AI, materials science, computational methods, experimental design, scattering methods
Cheng Peng
Stanford Institute for Materials and Energy Sciences, Stanford University, Stanford, CA 94305, USA.
Rajan Plumley
SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA; Stanford Institute for Materials and Energy Sciences, Stanford University, Stanford, CA 94305, USA; Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
Chun Hong Yoon
SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA.
Jana B. Thayer
SLAC National Accelerator Laboratory
Joshua J. Turner
SLAC National Accelerator Laboratory