🤖 AI Summary
This work proposes ArGEnT, a geometry-aware attention architecture designed to address the challenges of operator learning under complex geometries and parametric physical conditions. By directly embedding point clouds into a DeepONet backbone, ArGEnT explicitly encodes geometric information through self-attention, cross-attention, and hybrid attention mechanisms, reducing reliance on signed distance functions. The framework enables flexible predictions at arbitrary spatial locations and achieves strong generalization across diverse geometries. Evaluated on benchmark tasks spanning fluid dynamics, solid mechanics, and electrochemical systems, ArGEnT significantly outperforms standard DeepONet and other geometry-aware models, demonstrating substantial improvements in both predictive accuracy and generalization capability.
📝 Abstract
Learning solution operators for systems with complex, varying geometries and parametric physical settings is a central challenge in scientific machine learning. In many-query regimes such as design optimization, control, and inverse problems, surrogate modeling must generalize across geometries while allowing flexible evaluation at arbitrary spatial locations. In this work, we propose the Arbitrary Geometry-encoded Transformer (ArGEnT), a geometry-aware attention-based architecture for operator learning on arbitrary domains. ArGEnT employs Transformer attention mechanisms to encode geometric information directly from point-cloud representations, with three variants (self-attention, cross-attention, and hybrid attention) that implement different strategies for encoding geometric features. By integrating ArGEnT into DeepONet as the trunk network, we develop a surrogate modeling framework capable of learning operator mappings that depend on both geometric and non-geometric inputs, without the need to explicitly parametrize geometry as a branch-network input. Evaluating on benchmark problems spanning fluid dynamics, solid mechanics, and electrochemical systems, we demonstrate significantly improved prediction accuracy and generalization performance compared with the standard DeepONet and other existing geometry-aware surrogates. In particular, the cross-attention Transformer variant enables accurate geometry-conditioned predictions with reduced reliance on signed distance functions. By combining flexible geometry encoding with operator-learning capabilities, ArGEnT provides a scalable surrogate modeling framework for optimization, uncertainty quantification, and data-driven modeling of complex physical systems.
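To make the cross-attention variant concrete, below is a minimal PyTorch sketch of a geometry-conditioned trunk network combined with a standard DeepONet branch net via an inner product. It is an illustrative reading of the abstract, not the authors' implementation: the module names (`CrossAttentionTrunk`, `GeometryDeepONet`), layer sizes, and the single cross-attention block are all assumptions.

```python
# Hedged sketch: query locations attend to a geometry point cloud (cross-attention),
# and the resulting trunk features are combined with a branch net as in DeepONet.
# All names, dimensions, and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttentionTrunk(nn.Module):
    def __init__(self, coord_dim=2, d_model=64, n_heads=4):
        super().__init__()
        self.query_embed = nn.Linear(coord_dim, d_model)   # embed evaluation points
        self.geom_embed = nn.Linear(coord_dim, d_model)     # embed geometry point cloud
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, query_pts, geom_pts):
        # query_pts: (B, Nq, coord_dim) arbitrary spatial evaluation locations
        # geom_pts:  (B, Ng, coord_dim) point-cloud representation of the domain
        q = self.query_embed(query_pts)
        kv = self.geom_embed(geom_pts)
        attn_out, _ = self.cross_attn(q, kv, kv)   # queries attend to geometry
        return self.ffn(q + attn_out)              # (B, Nq, d_model)

class GeometryDeepONet(nn.Module):
    def __init__(self, n_sensors=50, coord_dim=2, d_model=64):
        super().__init__()
        # Branch net encodes the non-geometric input function sampled at sensor points
        self.branch = nn.Sequential(nn.Linear(n_sensors, d_model), nn.GELU(),
                                    nn.Linear(d_model, d_model))
        self.trunk = CrossAttentionTrunk(coord_dim, d_model)

    def forward(self, u_sensors, query_pts, geom_pts):
        b = self.branch(u_sensors)               # (B, d_model)
        t = self.trunk(query_pts, geom_pts)      # (B, Nq, d_model)
        # Standard DeepONet combination: inner product over the latent dimension
        return torch.einsum("bd,bnd->bn", b, t)  # (B, Nq) predicted field values
```

In this sketch the geometry enters only through the trunk's cross-attention, so the branch net never needs an explicit geometric parametrization or a signed distance function, which mirrors the design intent described in the abstract.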