🤖 AI Summary
Graph neural networks often struggle to accurately model natural smoothing processes, such as diffusion, in dynamical systems: standard architectures suffer from oversmoothing, while unitary convolutions can be overly constraining. This work proposes a relaxed unitary convolution mechanism that permits the smoothing required by physical systems while maintaining feature discriminability. Notably, it is the first to extend both unitary and relaxed unitary convolutions from graph-structured data to meshes. By combining graph neural networks with numerical modeling of partial differential equations and physics-informed dynamics learning, the proposed method achieves significant performance gains over strong baselines, including mesh-aware Transformers and equivariant neural networks, on mesh-based tasks such as heat equation modeling, wave propagation, and weather forecasting.
📝 Abstract
Modern neural networks have shown promise for solving partial differential equations over surfaces, often by discretizing the surface as a mesh and learning with a mesh-aware graph neural network. However, graph neural networks suffer from oversmoothing, where a node's features become increasingly similar to those of its neighbors. Unitary graph convolutions, which are mathematically constrained to preserve the smoothness of node features, have been proposed to address this issue. However, in many physical systems, such as diffusion processes, smoothness naturally increases over time, and unitarity may be overconstraining. In this paper, we systematically study the smoothing effects of different GNNs for dynamics modeling and prove that unitary convolutions hurt performance for such tasks. We propose relaxed unitary convolutions that balance smoothness preservation with the natural smoothing required by physical systems. We also generalize unitary and relaxed unitary convolutions from graphs to meshes. In experiments on PDEs such as the heat and wave equations over complex meshes and on weather forecasting, we find that our method outperforms several strong baselines, including mesh-aware transformers and equivariant neural networks.
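The three behaviors the abstract contrasts — oversmoothing under repeated message passing, exact norm preservation under unitary updates, and controlled contraction under relaxation — can be sketched numerically. The example below is a toy illustration under our own assumptions (a 4-node cycle graph, a GCN-style normalized adjacency, and a scalar relaxation factor `gamma`); it is not the paper's implementation.

```python
import numpy as np

# Build an undirected 4-node cycle graph.
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Standard GCN-style propagation: symmetrically normalized adjacency
# with self-loops. Repeated application drives node features toward a
# constant vector (oversmoothing), so their spread collapses.
A_tilde = A + np.eye(n)
deg = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(deg, deg))

x = np.array([1.0, -1.0, 2.0, 0.0])  # arbitrary node features
x_smooth = x.copy()
for _ in range(50):
    x_smooth = A_hat @ x_smooth
print(np.std(x), np.std(x_smooth))  # spread collapses toward 0

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small M)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Unitary (here: real orthogonal) update: the exponential of a
# skew-symmetric generator supported on the graph's edges. It preserves
# the feature norm exactly, so it cannot oversmooth -- but it also
# cannot contract, which a diffusion-like system needs.
S = np.triu(A, 1) - np.triu(A, 1).T  # skew-symmetric: S.T == -S
U = expm(S)
print(np.linalg.norm(x), np.linalg.norm(U @ x))  # norms match

# One way to relax unitarity (our sketch, not the paper's exact
# parameterization): scale the orthogonal update by a factor gamma in
# (0, 1], allowing a bounded contraction per layer instead of
# forbidding smoothing altogether.
gamma = 0.9
x_relaxed = gamma * (U @ x)
print(np.linalg.norm(x_relaxed) / np.linalg.norm(x))  # equals gamma
```

The skew-symmetric generator makes `U` orthogonal by construction (`expm(S).T @ expm(S) = expm(-S) @ expm(S) = I`), which is why the unitary update cannot lose feature energy; the relaxed variant re-admits exactly the kind of contraction that a diffusion process exhibits.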