🤖 AI Summary
To address high-fidelity surrogate modeling for complex, irregular geometries and strongly nonlinear physical regimes, this paper proposes GeoTransolver, a Multiscale Geometry-Aware Physics Attention Transformer that unifies geometric sensitivity with physical consistency. The core innovation is the Geometry-Aware Local Embedding (GALE) attention mechanism, which couples self-attention over learned physics-state slices with geometry- and boundary-condition-aware cross-attention built on multi-scale ball queries. In addition, a persistent joint embedding projects geometric and operating-condition parameters into the physical state space, anchoring latent computations to domain structure and operating regimes. Implemented and released in NVIDIA's PhysicsNeMo framework for computer-aided engineering (CAE), GeoTransolver achieves state-of-the-art performance on benchmarks such as DrivAerML, outperforming DoMINO, Transolver, and AB-UPT in drag and lift prediction (R² improvement >5%), generalizing more robustly to unseen geometries and operating conditions, and reducing data requirements by over 30%.
📝 Abstract
We present GeoTransolver, a Multiscale Geometry-Aware Physics Attention Transformer for CAE that replaces standard attention with GALE, coupling physics-aware self-attention on learned state slices with cross-attention to a shared geometry/global/boundary-condition context that is computed from multi-scale ball queries (inspired by DoMINO) and reused in every block. Implemented and released in NVIDIA PhysicsNeMo, GeoTransolver persistently projects geometry, global, and boundary-condition parameters into physical state spaces, anchoring latent computations to domain structure and operating regimes. We benchmark GeoTransolver on DrivAerML, Luminary SHIFT-SUV, and Luminary SHIFT-Wing, comparing against DoMINO, Transolver (as released in PhysicsNeMo), and literature-reported AB-UPT, and evaluate drag/lift R² and relative L1 errors on field variables. GeoTransolver delivers better accuracy, improved robustness to geometry and regime shifts, and favorable data efficiency; we include ablations on DrivAerML and qualitative results such as contour plots and design trends for the best GeoTransolver models. By unifying multiscale geometry-aware context with physics-based attention in a scalable transformer, GeoTransolver advances operator learning for high-fidelity surrogate modeling across complex, irregular domains and nonlinear physical regimes.
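To make the attention design concrete, below is a minimal PyTorch sketch of one plausible GALE-style block, assuming a Transolver-like soft slicing of point features and a context tensor of geometry/global/boundary-condition tokens that has been computed once (e.g., from multi-scale ball queries) and is reused across blocks. The abstract does not specify GALE's internals, so all names here (`GALEBlock`, `n_slices`, `ctx`) are illustrative assumptions, not the released PhysicsNeMo API.

```python
import torch
import torch.nn as nn


class GALEBlock(nn.Module):
    """Illustrative GALE-style block (not the official implementation):
    physics-aware self-attention on learned state slices, followed by
    cross-attention to a shared geometry/global/BC context."""

    def __init__(self, dim: int, n_heads: int = 8, n_slices: int = 64):
        super().__init__()
        # Soft assignment of each point to n_slices learned physics-state slices.
        self.slice_proj = nn.Linear(dim, n_slices)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # x:   (B, N, dim) per-point physical state features
        # ctx: (B, K, dim) geometry/global/BC context tokens, assumed to be
        #      precomputed once (e.g., via multi-scale ball queries) and
        #      passed unchanged to every block
        w = self.slice_proj(x).softmax(dim=-1)            # (B, N, M) point->slice weights
        slices = torch.einsum("bnm,bnd->bmd", w, x)       # (B, M, dim) slice tokens
        slices = slices / (w.sum(dim=1).unsqueeze(-1) + 1e-6)

        # Physics-aware self-attention among slice tokens.
        h = self.norm1(slices)
        slices = slices + self.self_attn(h, h, h, need_weights=False)[0]

        # Geometry/BC-aware cross-attention: slices query the shared context.
        h = self.norm2(slices)
        slices = slices + self.cross_attn(h, ctx, ctx, need_weights=False)[0]

        # Scatter slice updates back to points, then apply a pointwise MLP.
        x = x + torch.einsum("bnm,bmd->bnd", w, slices)
        return x + self.mlp(self.norm3(x))


# Hypothetical usage: 4096 mesh points, 256 precomputed context tokens.
block = GALEBlock(dim=128)
x = torch.randn(2, 4096, 128)
ctx = torch.randn(2, 256, 128)
y = block(x, ctx)  # (2, 4096, 128)
```

One property this sketch shares with the description above: because `ctx` is built once and only read by each block, the cost of the multi-scale ball queries is amortized over the whole network, which is what "computed from multi-scale ball queries ... and reused in every block" suggests.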