GeoTransolver: Learning Physics on Irregular Domains Using a Multi-scale Geometry-Aware Physics Attention Transformer

📅 2025-12-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address high-fidelity surrogate modeling for complex irregular geometries and strongly nonlinear physical regimes, this paper proposes GeoTransolver, a Multi-scale Geometry-Aware Physics Attention Transformer that unifies geometric sensitivity and physical consistency. The core innovation is the Geometry-Aware Local Embedding (GALE) attention mechanism, which couples self-attention over learned physics-state slices with geometry- and boundary-aware cross-attention built from multi-scale ball queries. Additionally, a joint embedding projection maps geometric and operating-condition parameters into the physical state space, implicitly anchoring latent computations to the domain and operating regime. Implemented and released in NVIDIA's PhysicsNeMo framework for computer-aided engineering (CAE), GeoTransolver achieves state-of-the-art performance on benchmarks such as DrivAerML, outperforming DoMINO, Transolver, and AB-UPT in drag and lift prediction (R² improvement >5%), generalizing better to unseen geometries and operating conditions, and reducing data requirements by over 30%.

📝 Abstract
We present GeoTransolver, a Multiscale Geometry-Aware Physics Attention Transformer for CAE that replaces standard attention with GALE, coupling physics-aware self-attention on learned state slices with cross-attention to a shared geometry/global/boundary-condition context computed from multi-scale ball queries (inspired by DoMINO) and reused in every block. Implemented and released in NVIDIA PhysicsNeMo, GeoTransolver persistently projects geometry, global, and boundary-condition parameters into physical state spaces to anchor latent computations to domain structure and operating regimes. We benchmark GeoTransolver on DrivAerML, Luminary SHIFT-SUV, and Luminary SHIFT-Wing, comparing against DoMINO, Transolver (as released in PhysicsNeMo), and literature-reported AB-UPT, and evaluate drag/lift R² and relative L1 errors for field variables. GeoTransolver delivers better accuracy, improved robustness to geometry/regime shifts, and favorable data efficiency; we include ablations on DrivAerML and qualitative results such as contour plots and design trends for the best GeoTransolver models. By unifying multiscale geometry-aware context with physics-based attention in a scalable transformer, GeoTransolver advances operator learning for high-fidelity surrogate modeling across complex, irregular domains and nonlinear physical regimes.
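The GALE mechanism described in the abstract can be sketched very loosely as two attention passes per block: self-attention over the learned physics-state slices, then cross-attention from those slices to a shared geometry/boundary-condition context that is computed once and reused in every block. The sketch below is illustrative only, assuming random matrices in place of learned weights; none of the names correspond to the actual PhysicsNeMo API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def gale_block(state_slices, geo_context, rng):
    """One hypothetical GALE-style block: physics self-attention over
    state slices, then geometry/boundary-aware cross-attention to a
    shared context. Random projections stand in for learned weights."""
    d = state_slices.shape[-1]
    def proj():
        return rng.standard_normal((d, d)) / np.sqrt(d)
    # physics-aware self-attention over the M state slices
    h = state_slices + attention(state_slices @ proj(),
                                 state_slices @ proj(),
                                 state_slices @ proj())
    # cross-attention: queries from the state, keys/values from the
    # precomputed multi-scale geometry/global/BC context
    h = h + attention(h @ proj(), geo_context @ proj(), geo_context @ proj())
    return h

rng = np.random.default_rng(0)
M, G, d = 8, 16, 32                      # slices, context tokens, channels
slices = rng.standard_normal((M, d))
context = rng.standard_normal((G, d))    # shared, reused across blocks
out = gale_block(slices, context, rng)
print(out.shape)                         # (8, 32)
```

Because the context is computed once from the geometry and reused in every block, the per-block cost stays linear in the number of slices rather than in the number of mesh points.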
Problem

Research questions and friction points this paper is trying to address.

Develops a transformer model for physics simulation on irregular domains
Improves accuracy and robustness in computational fluid dynamics
Enables efficient surrogate modeling for complex nonlinear physical regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-scale geometry-aware physics attention transformer
Replaces standard attention with GALE for physics coupling
Projects geometry into state spaces for domain anchoring
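The multi-scale ball queries mentioned above gather, for each anchor point, the surface points falling within several radii, so the geometry context mixes fine and coarse neighborhoods. A minimal brute-force sketch (the real implementation presumably uses spatial acceleration structures; `ball_query` and all parameters here are hypothetical):

```python
import numpy as np

def ball_query(points, centers, radius, k):
    """Return up to k neighbor indices within `radius` of each center,
    padded by repetition; a brute-force stand-in for one scale of the
    multi-scale ball queries that build the geometry context."""
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    rows = []
    for row in d2:
        idx = np.where(row <= radius ** 2)[0][:k]
        if idx.size == 0:
            idx = np.array([int(row.argmin())])  # fall back to nearest point
        pad = np.full(k, idx[0])
        pad[: idx.size] = idx
        rows.append(pad)
    return np.stack(rows)

rng = np.random.default_rng(1)
surface = rng.random((500, 3))            # surrogate for mesh points
anchors = rng.random((4, 3))
scales = [0.1, 0.25, 0.5]                 # one query per radius
neigh = [ball_query(surface, anchors, r, k=16) for r in scales]
print([n.shape for n in neigh])           # [(4, 16), (4, 16), (4, 16)]
```

The per-scale neighbor sets would then be featurized and pooled into the shared context tokens that the cross-attention consumes.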
Authors

Corey Adams
NVIDIA Corporation, San Jose, California, USA
Rishikesh Ranade
Senior Engineer - Physics ML, NVIDIA (Machine Learning, Computational Science)
Ram Cherukuri
NVIDIA Corporation, San Jose, California, USA
Sanjay Choudhry
NVIDIA Corporation, San Jose, California, USA