Refine Now, Query Fast: A Decoupled Refinement Paradigm for Implicit Neural Fields

📅 2026-02-16
🤖 AI Summary
Implicit neural fields in 3D scientific simulation struggle to simultaneously achieve high fidelity and fast inference: deep MLPs offer strong expressivity but are computationally expensive, whereas embedded models are efficient yet limited in capacity. To address this trade-off, this work proposes a Decoupled Representation Refinement (DRR) architecture that leverages a deep refinement network and non-parametric transformations during an offline phase to compress high-capacity representations into compact embeddings, thereby decoupling expressiveness from inference efficiency. Additionally, a Variational Pairs data augmentation strategy is introduced to enhance representation performance on complex tasks. Experiments demonstrate that the proposed method achieves state-of-the-art fidelity across multiple ensemble simulation datasets, with inference speeds up to 27× faster than high-fidelity baselines while remaining competitive with the fastest existing models.

📝 Abstract
Implicit Neural Representations (INRs) have emerged as promising surrogates for large 3D scientific simulations due to their ability to continuously model spatial and conditional fields, yet they face a critical fidelity-speed dilemma: deep MLPs suffer from high inference cost, while efficient embedding-based models lack sufficient expressiveness. To resolve this, we propose the Decoupled Representation Refinement (DRR) architectural paradigm. DRR leverages a deep refiner network, alongside non-parametric transformations, in a one-time offline process to encode rich representations into a compact and efficient embedding structure. This approach decouples slow, high-capacity neural networks from the fast inference path. We introduce DRR-Net, a simple network that validates this paradigm, and a novel data augmentation strategy, Variational Pairs (VP), for improving INRs on complex tasks such as high-dimensional surrogate modeling. Experiments on several ensemble simulation datasets demonstrate that our approach achieves state-of-the-art fidelity while being up to 27× faster at inference than high-fidelity baselines and remaining competitive with the fastest models. The DRR paradigm offers an effective strategy for building powerful and practical neural field surrogates, and INRs in broader applications, with minimal compromise between speed and quality.
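The decoupling idea in the abstract can be sketched in miniature: an expensive deep network is evaluated once offline to bake rich features into a compact embedding grid, after which queries only touch a cheap lookup and a shallow decoder. This is a minimal NumPy illustration of that split, not the paper's DRR-Net; all shapes, the random stand-in networks, and the nearest-vertex lookup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

R, F = 16, 8  # embedding grid resolution per axis, feature width (assumed)

def deep_refiner(xyz):
    """Stand-in for a slow, high-capacity refiner MLP (random weights)."""
    w1 = rng.standard_normal((3, 64))
    w2 = rng.standard_normal((64, F))
    return np.tanh(xyz @ w1) @ w2

# --- Offline phase (run once): evaluate the refiner at every grid
# vertex and store the resulting features in a compact embedding.
axes = np.linspace(0.0, 1.0, R)
gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
verts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
embedding = deep_refiner(verts).reshape(R, R, R, F)  # baked, reusable

# --- Online phase (fast path): O(1) embedding lookup + tiny decoder.
w_dec = rng.standard_normal((F, 1))

def query(xyz):
    idx = np.clip(np.rint(xyz * (R - 1)).astype(int), 0, R - 1)
    feats = embedding[idx[:, 0], idx[:, 1], idx[:, 2]]  # nearest vertex
    return feats @ w_dec  # shallow decode, no deep network at inference

out = query(np.array([[0.5, 0.5, 0.5], [0.1, 0.9, 0.3]]))
print(out.shape)  # (2, 1)
```

After the offline bake, `query` never calls `deep_refiner`, which is the source of the speedup the paper reports; a real implementation would use trilinear interpolation and a trained decoder rather than nearest-vertex lookup and random weights.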
Problem

Research questions and friction points this paper is trying to address.

Implicit Neural Representations
fidelity-speed trade-off
3D scientific simulations
neural field surrogates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled Representation Refinement
Implicit Neural Representations
Neural Fields
Variational Pairs
Surrogate Modeling