🤖 AI Summary
Existing neural surrogate models are predominantly evaluated on simplified, low-dimensional problems, which fail to expose their fragility in realistic multiphysics fluid flows. Method: This paper introduces REALM, a rigorous, application-driven benchmark framework comprising 11 high-fidelity spatiotemporal flow tasks (e.g., propulsion and fire safety), evaluated under a unified training and testing protocol across more than a dozen architectures (spectral operators, graph/grid networks, Transformers, etc.). Contribution/Results: REALM uncovers three fundamental limitations: (i) scale barriers arising from coupled dimensionality, stiffness, and mesh irregularity; (ii) performance governed by inductive bias rather than parameter count; and (iii) a significant gap between nominal accuracy and physical conservation behavior. It reveals consistent failure modes, including abrupt rollout error growth, loss of transient structures, and severe degradation in integral quantities. REALM establishes the first reproducible, quantitative, multiphysics benchmark for physics-aware surrogate modeling.
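The "abrupt rollout error growth" above refers to what happens when a one-step surrogate is fed its own predictions over many steps. A minimal sketch of how such an error curve can be measured; the function names, the relative L2 definition, and the toy example are our illustrative assumptions, not REALM's actual protocol:

```python
import numpy as np

def rollout_relative_errors(step_fn, u0, reference):
    """Autoregressive rollout: feed the surrogate its own prediction and
    record the per-step relative L2 error against a reference trajectory.

    step_fn   -- callable mapping one state array to the next state
    u0        -- initial state, same shape as each reference[t]
    reference -- ground-truth states, shape (T, ...) over T rollout steps
    """
    state = np.asarray(u0)
    errors = []
    for target in reference:
        state = step_fn(state)                # the model sees its own output
        num = np.linalg.norm(state - target)
        den = np.linalg.norm(target) + 1e-12  # guard against a zero-norm target
        errors.append(num / den)
    # A sharp knee in this curve is the "abrupt rollout error growth" failure mode.
    return np.asarray(errors)

# Toy usage with a "surrogate" that slightly damps the state each step:
ref = np.random.default_rng(0).standard_normal((50, 64, 64))
errs = rollout_relative_errors(lambda u: 0.98 * u, ref[0], ref[1:])
```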
📝 Abstract
Predicting multiphysics dynamics is computationally expensive and challenging due to the severe coupling of multi-scale, heterogeneous physical processes. While neural surrogates promise a paradigm shift, the field currently suffers from an "illusion of mastery", as repeatedly emphasized in top-tier commentaries: existing evaluations rely heavily on simplified, low-dimensional proxies, which fail to expose the models' inherent fragility in realistic regimes. To bridge this critical gap, we present REALM (REalistic AI Learning for Multiphysics), a rigorous benchmarking framework designed to test neural surrogates on challenging, application-driven reactive flows. REALM features 11 high-fidelity datasets, ranging from canonical multiphysics problems to complex propulsion and fire-safety scenarios, alongside a standardized end-to-end training and evaluation protocol that incorporates multiphysics-aware preprocessing and a robust rollout strategy. Using this framework, we systematically benchmark more than a dozen representative surrogate model families, including spectral operators, convolutional models, Transformers, pointwise operators, and graph/mesh networks, and identify three robust trends: (i) a scaling barrier governed jointly by dimensionality, stiffness, and mesh irregularity, leading to rapidly growing rollout errors; (ii) performance controlled primarily by architectural inductive biases rather than parameter count; and (iii) a persistent gap between nominal accuracy metrics and physically trustworthy behavior, in which models with high correlations still miss key transient structures and integral quantities. Taken together, REALM exposes the limits of current neural surrogates on realistic multiphysics flows and offers a rigorous testbed to drive the development of next-generation physics-aware architectures.
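The third trend contrasts a correlation-style accuracy metric with a conservation check on an integral quantity. A minimal sketch under an assumed (T, N) array layout; the helper names, the density example, and the cell-volume interface are illustrative assumptions, not the paper's metric definitions:

```python
import numpy as np

def integral_drift(pred, ref, cell_volumes):
    """Relative drift of a conserved integral (e.g. total mass) over a rollout.

    pred, ref    -- arrays of shape (T, N): a conserved field (say, density)
                    at N mesh cells over T rollout steps
    cell_volumes -- array of shape (N,): quadrature weights of the mesh cells
    """
    pred_int = pred @ cell_volumes  # domain integral of the field at each step
    ref_int = ref @ cell_volumes
    return np.abs(pred_int - ref_int) / (np.abs(ref_int) + 1e-12)

def per_step_correlation(pred, ref):
    """Per-step Pearson correlation: the kind of 'nominal accuracy' metric
    that can stay near 1 even while the integral above drifts badly."""
    p = pred - pred.mean(axis=1, keepdims=True)
    r = ref - ref.mean(axis=1, keepdims=True)
    num = (p * r).sum(axis=1)
    den = np.linalg.norm(p, axis=1) * np.linalg.norm(r, axis=1) + 1e-12
    return num / den

# A prediction that is uniformly inflated by 5% illustrates the gap: its
# correlation is exactly 1.0, yet the conserved integral drifts by 5%.
rng = np.random.default_rng(1)
ref = rng.random((20, 1000))
pred = 1.05 * ref
vols = np.full(1000, 1e-3)
print(per_step_correlation(pred, ref)[:3], integral_drift(pred, ref, vols)[:3])
```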