AI Summary
Neural physics simulators exhibit high sensitivity to variations in input mesh topology, resulting in poor cross-topology generalization and significant fluctuations in simulation accuracy. This work identifies mesh topology variation as a fundamental bottleneck undermining the robustness of neural simulators, a finding established here through systematic analysis. To address this, we propose a pretraining paradigm based on graph autoencoders that learns physics-consistent, topology-invariant mesh embeddings via unsupervised learning, effectively decoupling topological structure from dynamical representations. Our method integrates graph neural networks, physics-driven embedding, and topology-invariant representation learning. Extensive experiments across diverse deformation scenarios demonstrate its effectiveness: after pretraining, the standard deviation of simulation accuracy across heterogeneous-topology meshes decreases by over 40%, markedly improving cross-topology generalization and stability. This establishes a scalable foundation for robust and transferable neural physics simulation.
Abstract
Meshes are used to represent complex objects in high-fidelity physics simulators across a variety of domains, such as radar sensing and aerodynamics. There is growing interest in using neural networks to accelerate physics simulations, as well as a growing body of work on applying neural networks directly to irregular mesh data. Since multiple mesh topologies can represent the same object, mesh augmentation is typically required to handle topological variation when training neural networks. Because physics simulators are sensitive to small changes in mesh shape, it is challenging to use these augmentations when training neural-network-based physics simulators. In this work, we show that variations in mesh topology can significantly reduce the performance of neural network simulators. We evaluate whether pretraining can address this issue, and find that applying an established autoencoder pretraining technique to graph embedding models reduces the sensitivity of neural network simulators to variations in mesh topology. Finally, we highlight future research directions that may further reduce neural simulator sensitivity to mesh topology.
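To make the core idea concrete, here is a minimal sketch of the kind of graph-autoencoder pretraining objective the abstract describes: a GNN encoder produces node embeddings from mesh connectivity, and a decoder reconstructs node attributes from them. This is an illustrative NumPy toy, not the paper's implementation; the single GCN-style layer, the coordinate-reconstruction loss, and the two triangulations of a square (the same object under two topologies) are all assumptions chosen for brevity.

```python
import numpy as np

def normalized_adjacency(edges, n):
    # GCN-style propagation matrix: D^{-1/2} (A + I) D^{-1/2}.
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_encode(X, A_hat, W):
    # One message-passing layer with ReLU: Z = relu(A_hat X W).
    return np.maximum(A_hat @ X @ W, 0.0)

def reconstruction_loss(X, Z, W_dec):
    # Autoencoder objective: reconstruct node coordinates from embeddings.
    X_rec = Z @ W_dec
    return float(np.mean((X - X_rec) ** 2))

rng = np.random.default_rng(0)
# Toy "object": the four corners of a unit square, shared by both meshes.
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
boundary = [(0, 1), (1, 2), (2, 3), (3, 0)]
topo_a = boundary + [(0, 2)]  # triangulated with one diagonal
topo_b = boundary + [(1, 3)]  # same shape, the other diagonal

W_enc = rng.normal(size=(2, 8))  # shared encoder weights
W_dec = rng.normal(size=(8, 2))  # shared decoder weights

for name, edges in [("topology A", topo_a), ("topology B", topo_b)]:
    A_hat = normalized_adjacency(edges, len(X))
    Z = gcn_encode(X, A_hat, W_enc)
    print(name, "reconstruction loss:", round(reconstruction_loss(X, Z, W_dec), 4))
```

The two topologies yield different propagation matrices (and hence different embeddings) for the identical geometry; pretraining the encoder on such reconstruction objectives, as the abstract proposes, is what pushes the learned embeddings toward topology invariance.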