🤖 AI Summary
Implicit Neural Representations (INRs) struggle to capture localized high-frequency physical fields in scientific simulations, while incorporating rigid geometric priors compromises flexibility and inflates model size. To address these limitations, we propose Feature-Adaptive INR (FA-INR). Our method introduces two key innovations: (1) a cross-attention-based dynamic memory bank that enables on-demand capacity allocation, and (2) a coordinate-guided Mixture-of-Experts (MoE) architecture that enhances representation specialization and computational efficiency. Evaluated on three large-scale scientific simulation datasets, FA-INR achieves state-of-the-art fidelity with significantly reduced model size, pushing the Pareto frontier of the accuracy–compactness trade-off. This work establishes a new paradigm for high-fidelity, lightweight surrogate modeling in scientific computing.
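To make the first innovation concrete, the core idea of a cross-attention memory bank can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the slot count, dimensions, and names (`mem_keys`, `mem_vals`, `W_q`) are all illustrative assumptions, and the learnable parameters are stood in for by random arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes: 512 memory slots, 64-dim keys/values, 3-D coordinates.
num_slots, d_key, d_val, d_coord = 512, 64, 64, 3

# The "memory bank": learnable keys index the slots, values hold features.
# (Random placeholders here; in training these would be optimized.)
mem_keys = rng.standard_normal((num_slots, d_key))
mem_vals = rng.standard_normal((num_slots, d_val))

# A linear query projection standing in for a learned coordinate encoder.
W_q = rng.standard_normal((d_coord, d_key))

def memory_features(coords):
    """Cross-attention lookup: query coordinates attend over the memory bank,
    so capacity is allocated by learned attention rather than a fixed grid."""
    q = coords @ W_q                                           # (N, d_key)
    attn = softmax(q @ mem_keys.T / np.sqrt(d_key), axis=-1)   # (N, num_slots)
    return attn @ mem_vals                                     # (N, d_val)

coords = rng.random((4, d_coord))   # a small batch of query points
feats = memory_features(coords)
print(feats.shape)                  # (4, 64)
```

Because the attention weights are learned, slots can specialize on regions with localized high-frequency variation, which is the "on-demand capacity allocation" the summary refers to.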
📝 Abstract
Effective surrogate models are critical for accelerating scientific simulations. Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data, but they often struggle with complex scientific fields exhibiting localized, high-frequency variations. Recent approaches address this by introducing additional features along rigid geometric structures (e.g., grids), but at the cost of flexibility and increased model size. In this paper, we propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR). FA-INR leverages cross-attention to an augmented memory bank to learn flexible feature representations, enabling adaptive allocation of model capacity based on data characteristics, rather than rigid structural assumptions. To further improve scalability, we introduce a coordinate-guided mixture of experts (MoE) that enhances the specialization and efficiency of feature representations. Experiments on three large-scale ensemble simulation datasets show that FA-INR achieves state-of-the-art fidelity while significantly reducing model size, establishing a new trade-off frontier between accuracy and compactness for INR-based surrogates.
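The coordinate-guided MoE described above can be illustrated with a simple top-1 routing sketch: a gating function of the input coordinate picks one expert per query point, so each expert specializes on a region of the domain. This is a hedged NumPy sketch under assumed details (linear gate, linear experts, top-1 routing, names `W_gate` and `experts`), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

num_experts, d_coord, d_feat = 4, 3, 16

# Gating weights and one small linear "expert" per region of the domain.
# (Random placeholders; in training these would be learned jointly.)
W_gate = rng.standard_normal((d_coord, num_experts))
experts = [rng.standard_normal((d_coord, d_feat)) for _ in range(num_experts)]

def route(coords):
    """Top-1 coordinate-guided routing: each point is sent to one expert,
    so only that expert's parameters are evaluated for the point."""
    logits = coords @ W_gate                  # (N, num_experts)
    choice = logits.argmax(axis=-1)           # (N,) index of winning expert
    out = np.empty((coords.shape[0], d_feat))
    for e in range(num_experts):
        mask = choice == e
        if mask.any():
            out[mask] = coords[mask] @ experts[e]
    return out, choice

coords = rng.random((8, d_coord))
feats, choice = route(coords)
print(feats.shape, choice.shape)   # (8, 16) (8,)
```

Sparse routing of this kind is one way an MoE can add capacity without a proportional increase in per-query compute, which matches the scalability motivation in the abstract.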