π€ AI Summary
Existing Transformer-based neural operators struggle to simultaneously capture long-range dependencies and local dynamics when modeling partial differential equations (PDEs) on complex geometries and unstructured meshes.
Method: We propose a geometry-aware dynamic K-nearest-neighbor local patching scheme coupled with a globalβlocal collaborative attention mechanism. This integrates linear attention (for efficient global context modeling) and pairwise attention (for fine-grained local interaction modeling), augmented by a geometric-distance-driven dynamic patching strategy that eliminates reliance on uniform grids.
Contribution/Results: Evaluated on six benchmark PDE datasets, our method reduces prediction error by over 50% on average compared to state-of-the-art linear attention approaches, and outperforms full pairwise attention baselines even under optimal configurations. To the best of our knowledge, this is the first work enabling high-accuracy, data-driven, unstructured-mesh-adaptive modeling of general geometric PDEs.
π Abstract
Neural operators have emerged as promising frameworks for learning mappings governed by partial differential equations (PDEs), serving as data-driven alternatives to traditional numerical methods. While methods such as the Fourier neural operator (FNO) have demonstrated notable performance, their reliance on uniform grids restricts their applicability to complex geometries and irregular meshes. Recently, Transformer-based neural operators with linear attention mechanisms have shown potential in overcoming these limitations for large-scale PDE simulations. However, these approaches predominantly emphasize global feature aggregation, often overlooking fine-scale dynamics and localized PDE behaviors essential for accurate solutions. To address these challenges, we propose the Locality-Aware Attention Transformer (LA2Former), which leverages K-nearest neighbors for dynamic patchifying and integrates global-local attention for enhanced PDE modeling. By combining linear attention for efficient global context encoding with pairwise attention for capturing intricate local interactions, LA2Former achieves an optimal balance between computational efficiency and predictive accuracy. Extensive evaluations across six benchmark datasets demonstrate that LA2Former improves predictive accuracy by over 50% relative to existing linear attention methods, while also outperforming full pairwise attention under optimal conditions. This work underscores the critical importance of localized feature learning in advancing Transformer-based neural operators for solving PDEs on complex and irregular domains.