🤖 AI Summary
Accurately and efficiently predicting fluid flow fields over complex geometries remains challenging, particularly under data-scarce and out-of-distribution (OOD) generalization scenarios. Method: This work systematically benchmarks neural operators (Fourier Neural Operator and DeepONet) against vision Transformers for flow prediction, introducing a unified evaluation framework that jointly quantifies global accuracy, boundary-layer fidelity, and physical consistency. It establishes a geometric representation benchmark comparing signed distance fields (SDF) and binary masks. Contribution/Results: Vision Transformers reduce prediction error by 37% over neural operators under limited training data; SDF representations improve data efficiency, lowering mean relative error by 22% compared to binary masks. However, all models exhibit substantial performance degradation on unseen geometries. This study presents the first comprehensive, reproducible benchmarking suite for foundation models in engineering flow prediction—establishing standardized evaluation protocols and design principles for geometry-aware scientific machine learning.
📝 Abstract
Rapid yet accurate simulations of fluid dynamics around complex geometries is critical in a variety of engineering and scientific applications, including aerodynamics and biomedical flows. However, while scientific machine learning (SciML) has shown promise, most studies are constrained to simple geometries, leaving complex, real-world scenarios underexplored. This study addresses this gap by benchmarking diverse SciML models, including neural operators and vision transformer-based foundation models, for fluid flow prediction over intricate geometries. Using a high-fidelity dataset of steady-state flows across various geometries, we evaluate the impact of geometric representations -- Signed Distance Fields (SDF) and binary masks -- on model accuracy, scalability, and generalization. Central to this effort is the introduction of a novel, unified scoring framework that integrates metrics for global accuracy, boundary layer fidelity, and physical consistency to enable a robust, comparative evaluation of model performance. Our findings demonstrate that foundation models significantly outperform neural operators, particularly in data-limited scenarios, and that SDF representations yield superior results with sufficient training data. Despite these advancements, all models struggle with out-of-distribution generalization, highlighting a critical challenge for future SciML applications. By advancing both evaluation methodologies and modeling capabilities, this work paves the way for robust and scalable ML solutions for fluid dynamics across complex geometries.