High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations

📅 2025-06-07
🤖 AI Summary
Implicit Neural Representations (INRs) struggle to capture localized high-frequency physical fields in scientific simulations, while incorporating rigid geometric priors compromises flexibility and inflates model size. To address these limitations, we propose Feature-Adaptive INR (FA-INR). Our method introduces two key innovations: (1) a cross-attention-based dynamic memory bank that enables on-demand capacity allocation, and (2) a coordinate-guided Mixture-of-Experts (MoE) architecture that enhances representation specialization and computational efficiency. Evaluated on three large-scale scientific simulation datasets, FA-INR achieves state-of-the-art fidelity with significantly reduced model size—pushing the Pareto frontier of the accuracy–compactness trade-off. This work establishes a new paradigm for high-fidelity, lightweight surrogate modeling in scientific computing.

📝 Abstract
Effective surrogate models are critical for accelerating scientific simulations. Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data, but they often struggle with complex scientific fields exhibiting localized, high-frequency variations. Recent approaches address this by introducing additional features along rigid geometric structures (e.g., grids), but at the cost of flexibility and increased model size. In this paper, we propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR). FA-INR leverages cross-attention to an augmented memory bank to learn flexible feature representations, enabling adaptive allocation of model capacity based on data characteristics, rather than rigid structural assumptions. To further improve scalability, we introduce a coordinate-guided mixture of experts (MoE) that enhances the specialization and efficiency of feature representations. Experiments on three large-scale ensemble simulation datasets show that FA-INR achieves state-of-the-art fidelity while significantly reducing model size, establishing a new trade-off frontier between accuracy and compactness for INR-based surrogates.
Problem

Research questions and friction points this paper is trying to address.

Surrogate models struggle with the localized, high-frequency variations of complex scientific fields
Rigid geometric feature structures (e.g., grids) reduce flexibility and inflate model size
Adaptive capacity allocation is needed to achieve both accuracy and compactness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature-Adaptive INR (FA-INR) learns flexible, data-driven feature representations
Cross-attention to an augmented memory bank allocates capacity on demand
Coordinate-guided mixture of experts (MoE) improves specialization and scalability
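The two mechanisms above can be sketched in plain Python. Everything here is an illustrative assumption, not the paper's implementation: the sizes (`D`, `M`, `E`), the top-1 gating rule, and the function names are hypothetical, and real FA-INR would train these parameters and decode the feature with an MLP.

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

D, M, E = 4, 8, 3  # feature dim, memory entries per expert, number of experts

# Each expert holds a small memory bank of key/value pairs
# (randomly initialized here; in training these would be learned).
experts = [
    {"keys":   [[random.gauss(0, 1) for _ in range(D)] for _ in range(M)],
     "values": [[random.gauss(0, 1) for _ in range(D)] for _ in range(M)]}
    for _ in range(E)
]

# Coordinate-guided gating: a linear map from the 3-D coordinate to expert logits.
gate_w = [[random.gauss(0, 1) for _ in range(3)] for _ in range(E)]
# Query projection: lifts the 3-D coordinate to a D-dim attention query.
q_proj = [[random.gauss(0, 1) for _ in range(3)] for _ in range(D)]

def query_features(coord):
    """Route a coordinate to one expert, then cross-attend to its memory bank."""
    e = max(range(E), key=lambda i: dot(gate_w[i], coord))  # top-1 expert
    q = [dot(row, coord) for row in q_proj]
    keys, values = experts[e]["keys"], experts[e]["values"]
    attn = softmax([dot(q, k) / math.sqrt(D) for k in keys])
    # Attention-weighted sum of memory values = the adaptive feature vector,
    # which a decoder MLP would then map to the simulated field value.
    return [sum(a * v[d] for a, v in zip(attn, values)) for d in range(D)]

feat = query_features([0.1, 0.5, -0.3])  # D-dimensional adaptive feature
```

Because capacity lives in the memory banks rather than in a dense grid, entries can concentrate wherever the field is complex, which is the intuition behind the accuracy-compactness gains the paper reports.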
Ziwei Li
Department of Computer Science and Engineering, The Ohio State University
Yuhan Duan
Department of Computer Science and Engineering, The Ohio State University
Tianyu Xiong
Department of Computer Science and Engineering, The Ohio State University
Yi-Tang Chen
Department of Computer Science and Engineering, The Ohio State University
Wei-Lun Chao
Department of Computer Science and Engineering, The Ohio State University
Han-Wei Shen
Program Director, NSF IIS/HCC; EIC, IEEE TVCG; Professor, The Ohio State University
visualization · computer graphics · data analytics · machine learning