AI Summary
Existing implicit neural representations (INRs) are constrained by compact network architectures, limiting their ability to model multiscale structures, high-frequency components, and fine-grained textures prevalent in scientific data. To address this, we propose WIEN-INR, a wavelet-guided multiscale INR framework that integrates the discrete wavelet transform (DWT) as a structural prior into network design, enabling hierarchical feature decomposition across resolutions. At the finest scale, a lightweight, dedicated kernel network is introduced to precisely capture subtle textural details. Crucially, WIEN-INR achieves full-spectrum information encoding without increasing model parameter count. Experiments across multiple scientific datasets demonstrate that WIEN-INR significantly improves reconstruction fidelity for high-frequency details and complex structures, while reducing model size by 32%–57%, accelerating training by 1.8×–2.4×, and lowering storage overhead. This establishes a new paradigm for high-fidelity scientific data modeling under resource-constrained conditions.
Abstract
Implicit neural representations (INRs) have emerged as a compact and parametric alternative to discrete array-based data representations, encoding information directly in neural network weights to enable resolution-independent representation and memory efficiency. However, existing INR approaches, when constrained to compact network sizes, struggle to faithfully represent the multi-scale structures, high-frequency information, and fine textures that characterize most scientific datasets. To address this limitation, we propose WIEN-INR, a wavelet-informed implicit neural representation that distributes modeling across different resolution scales and employs a specialized kernel network at the finest scale to recover subtle details. This multi-scale architecture allows smaller networks to retain the full spectrum of information while preserving training efficiency and reducing storage cost. Through extensive experiments on diverse scientific datasets spanning different scales and structural complexities, WIEN-INR achieves superior reconstruction fidelity while maintaining a compact model size. These results demonstrate WIEN-INR as a practical neural representation framework for high-fidelity scientific data encoding, extending the applicability of INRs to domains where efficient preservation of fine detail is essential.
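To make the structural prior concrete: the discrete wavelet transform splits a signal into a coarse approximation band and a high-frequency detail band at each level, and each band could then be fitted by its own small network, with the finest detail band handled by the dedicated kernel network. The following is a minimal, illustrative single-level Haar DWT sketch in NumPy, not the paper's actual implementation; the function names and the toy signal are ours:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: split a 1-D signal (even length)
    into approximation (low-pass) and detail (high-pass) bands."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)   # coarse structure
    detail = (even - odd) / np.sqrt(2)   # fine / high-frequency content
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse Haar DWT: recombine the two bands into the original signal."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# Toy signal: decompose, then verify perfect reconstruction.
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
x_rec = haar_idwt(a, d)
```

Applying `haar_dwt` recursively to the approximation band yields the multi-level hierarchy the framework distributes across networks; because the transform is invertible, no information is lost in the decomposition itself.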