🤖 AI Summary
This work addresses the limited geometric representational capacity of existing methods for learning partial differential equation (PDE) operators on complex geometries. To this end, we systematically integrate optimal transport (OT) into neural operator frameworks. Our core method generalizes discrete grids to grid density functions and models them via instance-dependent, differentiable OT maps from each density to a uniform reference density, yielding geometry-aware embeddings. Subsequently, 3D surfaces are parameterized onto a 2D latent manifold, where PDE operator learning is performed directly. To our knowledge, this is the first systematic application of OT to geometry-aware operator learning. Evaluated on the ShapeNet-Car, DrivAerNet-Car, and FlowBench datasets, our model achieves significantly higher accuracy and superior geometric generalization while incurring substantially lower time and memory overhead than state-of-the-art methods.
📝 Abstract
We propose integrating optimal transport (OT) into operator learning for partial differential equations (PDEs) on complex geometries. Classical geometric learning methods typically represent domains as meshes, graphs, or point clouds. Our approach generalizes discretized meshes to mesh density functions, formulating geometry embedding as an OT problem that maps these functions to a uniform density in a reference space. Compared with previous methods that rely on interpolation or a shared deformation, our OT-based method employs instance-dependent deformation, offering greater flexibility and effectiveness. For 3D simulations focused on surfaces, our OT-based neural operator embeds the surface geometry into a 2D parameterized latent space. By performing computations directly on this 2D representation of the surface manifold, it achieves significant efficiency gains over volumetric simulation. Experiments with the Reynolds-averaged Navier–Stokes (RANS) equations on the ShapeNet-Car and DrivAerNet-Car datasets show that our method achieves higher accuracy while reducing computational cost in both time and memory compared with existing machine learning models. Additionally, our model demonstrates significantly improved accuracy on the FlowBench dataset, underscoring the benefit of instance-dependent deformation for datasets with highly variable geometries.
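To give intuition for the abstract's core idea — an instance-dependent OT map sending a mesh density to a uniform reference density — here is a minimal 1D toy sketch. In one dimension, the OT map to the uniform density on [0, 1] is simply the empirical CDF (monotone rearrangement). This is an illustrative assumption, not the paper's actual multi-dimensional construction; all names below are hypothetical:

```python
import numpy as np

def ot_map_to_uniform(points):
    """1D OT map sending an empirical point density to the uniform
    density on [0, 1]: each point is sent to its (midpoint) empirical
    CDF value. Toy analogue of an instance-dependent geometry embedding."""
    order = np.argsort(points)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(points))
    # midpoint quantiles give an evenly spaced (uniform) image
    return (ranks + 0.5) / len(points)

# two different "geometries" (point densities) induce different maps,
# but both are transported to the same uniform reference
rng = np.random.default_rng(0)
dense_left = np.sort(rng.beta(2, 8, size=200))   # mass concentrated near 0
dense_right = np.sort(rng.beta(8, 2, size=200))  # mass concentrated near 1
u1 = ot_map_to_uniform(dense_left)
u2 = ot_map_to_uniform(dense_right)
```

The point of the sketch: the map itself depends on the input density (it differs between `dense_left` and `dense_right`), while its image is always the same uniform grid — mirroring how the paper's embedding adapts per geometry while targeting a fixed reference space.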