🤖 AI Summary
Implicit neural fields for 3D scientific simulation struggle to achieve high fidelity and fast inference simultaneously: deep MLPs offer strong expressivity but are computationally expensive, whereas embedding-based models are efficient yet limited in capacity. To address this trade-off, this work proposes a Decoupled Representation Refinement (DRR) architecture that uses a deep refinement network and non-parametric transformations during a one-time offline phase to compress high-capacity representations into compact embeddings, thereby decoupling expressiveness from inference efficiency. A Variational Pairs data augmentation strategy is also introduced to improve representation quality on complex tasks. Experiments show that the proposed method achieves state-of-the-art fidelity across multiple ensemble simulation datasets, with inference up to 27× faster than high-fidelity baselines while matching the efficiency of the fastest existing models.
📝 Abstract
Implicit Neural Representations (INRs) have emerged as promising surrogates for large 3D scientific simulations due to their ability to continuously model spatial and conditional fields, yet they face a critical fidelity-speed dilemma: deep MLPs suffer from high inference cost, while efficient embedding-based models lack sufficient expressiveness. To resolve this, we propose the Decoupled Representation Refinement (DRR) architectural paradigm. DRR leverages a deep refiner network, alongside non-parametric transformations, in a one-time offline process to encode rich representations into a compact and efficient embedding structure. This approach decouples slow, high-capacity neural networks from the fast inference path. We introduce DRR-Net, a simple network that validates this paradigm, and a novel data augmentation strategy, Variational Pairs (VP), for improving INRs on complex tasks such as high-dimensional surrogate modeling. Experiments on several ensemble simulation datasets demonstrate that our approach achieves state-of-the-art fidelity while being up to 27× faster at inference than high-fidelity baselines and remaining competitive with the fastest models. The DRR paradigm offers an effective strategy for building powerful and practical neural field surrogates, and INRs in broader applications, with minimal compromise between speed and quality.
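The core decoupling idea can be illustrated with a minimal NumPy sketch. This is not the paper's actual architecture: the 1D coordinate grid, layer sizes, names (`refiner`, `decoder`, `fast_infer`), and the use of plain linear interpolation are all our illustrative assumptions. The point is only the split: a deep, slow network is evaluated once offline to populate a compact embedding table, and the per-query inference path touches only the cached table plus a tiny decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Apply a ReLU MLP; the final layer is linear."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

def init_mlp(sizes):
    """Random weights for a stack of dense layers (illustrative only)."""
    return [(rng.normal(scale=0.3, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

# --- Offline phase (run once): a deep "refiner" network maps every grid
# coordinate to a compact embedding vector, which is then cached. ---
GRID = 16                                   # hypothetical 1D grid resolution
EMB = 8                                     # embedding width
refiner = init_mlp([1, 64, 64, 64, EMB])    # deep, expressive, slow
coords = np.linspace(0.0, 1.0, GRID)[:, None]
embedding_table = mlp(coords, refiner)      # shape (GRID, EMB), cached

# --- Online phase (per query): only interpolation into the cached table
# plus a shallow decoder runs, so inference stays cheap. ---
decoder = init_mlp([EMB, 16, 1])            # shallow, fast

def fast_infer(x):
    """Query the scalar field at x in [0, 1] via the cached embeddings."""
    t = np.clip(x, 0.0, 1.0) * (GRID - 1)
    i0 = np.floor(t).astype(int)
    i1 = np.minimum(i0 + 1, GRID - 1)
    w = (t - i0)[:, None]
    emb = (1 - w) * embedding_table[i0] + w * embedding_table[i1]
    return mlp(emb, decoder)

out = fast_infer(np.array([0.0, 0.5, 1.0]))
print(out.shape)  # (3, 1)
```

Note that the refiner never appears in `fast_infer`: its capacity is "baked into" the embedding table during the offline phase, which is what allows high-fidelity representations without paying the deep network's cost at query time.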