🤖 AI Summary
To address the high computational cost, memory consumption, and poor scalability of graph neural networks (GNNs) in simulating large-scale grain growth, this work proposes a CNN-GNN hybrid architecture. A bijective autoencoder enables lossless spatial compression, mapping high-dimensional microstructures into a low-dimensional latent space; GNNs efficiently evolve grain topology within this latent space, while CNNs model local spatiotemporal dynamics. This design balances representational fidelity with scalability. Experiments on a 160³ grid demonstrate a 117× reduction in memory usage and a 115× speedup in inference time, alongside a reduction in message-passing layers from 12 to 3. Moreover, the method achieves significantly improved long-term prediction accuracy and stability over pure GNN baselines. The core contribution is the first integration of bijective compression into GNN-based microstructure modeling, enabling high-fidelity, low-overhead grain growth simulation.
📝 Abstract
Graph neural networks (GNNs) have emerged as a promising machine learning method for microstructure simulations such as grain growth. However, accurate modeling of realistic grain boundary networks requires large simulation cells, which GNNs have difficulty scaling to. To alleviate the computational costs and memory footprint of GNNs, we propose a hybrid architecture combining a convolutional neural network (CNN) based bijective autoencoder that compresses the spatial dimensions with a GNN that evolves the microstructure in the spatially reduced latent space. Our results demonstrate that the new design significantly reduces computational costs while using fewer message-passing layers (from 12 down to 3) compared with a GNN alone. The reduction in computational cost becomes more pronounced as the spatial size increases, indicating strong computational scalability. For the largest mesh evaluated (160³), our method reduces memory usage and inference runtime by 117× and 115×, respectively, compared with the GNN-only baseline. More importantly, it shows higher accuracy and stronger spatiotemporal modeling capability than the GNN-only baseline, especially in long-term testing. Such a combination of scalability and accuracy is essential for simulating realistic material microstructures over extended time scales. The improvements can be attributed to the bijective autoencoder's ability to losslessly compress information from the spatial domain into a high-dimensional feature space, producing more expressive latent features for the GNN to learn from while also contributing its own spatiotemporal modeling capability. The model was trained to learn from the stochastic Potts Monte Carlo method. Our findings provide a highly scalable approach for simulating grain growth.
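The key idea behind the bijective compression is that spatial resolution can be traded for channel depth without losing any information, so the GNN operates on a much smaller grid of richer node features. The paper's autoencoder is a learned CNN, but the lossless-compression principle can be illustrated with a fixed space-to-depth rearrangement on a 3D voxel grid (a sketch with an assumed compression factor `f`; function names are illustrative, not from the paper):

```python
import numpy as np

def space_to_depth_3d(x, f):
    """Losslessly fold f*f*f spatial blocks of a (H, W, D) grid into channels."""
    H, W, D = x.shape
    x = x.reshape(H // f, f, W // f, f, D // f, f)
    x = x.transpose(0, 2, 4, 1, 3, 5)          # group block indices together
    return x.reshape(H // f, W // f, D // f, f ** 3)

def depth_to_space_3d(z, f):
    """Exact inverse: unfold channels back into spatial blocks."""
    h, w, d, _ = z.shape
    z = z.reshape(h, w, d, f, f, f)
    z = z.transpose(0, 3, 1, 4, 2, 5)          # interleave block indices back
    return z.reshape(h * f, w * f, d * f)

# A tiny 4^3 "microstructure" of grain IDs, compressed by a factor of 2 per axis.
grid = np.arange(4 * 4 * 4).reshape(4, 4, 4)
latent = space_to_depth_3d(grid, 2)            # shape (2, 2, 2, 8): 8x fewer sites
assert latent.shape == (2, 2, 2, 8)
assert np.array_equal(depth_to_space_3d(latent, 2), grid)  # bijective round trip
```

Because the map is bijective, decoding recovers the original grid exactly; a GNN run on the compressed grid sees 8× fewer nodes per factor-2 compression, which is why far fewer message-passing layers are needed to propagate information across the same physical distance.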