🤖 AI Summary
To address the low efficiency of conventional phase-field simulations, their reliance on long simulation trajectories, and the difficulty of integrating experimental videos when predicting the temporal microstructural evolution of alloy systems under elastic fields, this paper proposes a cascaded convolutional recurrent neural network (CRNN) framework. Trained on phase-field simulation data, the framework jointly performs elastic-parameter inversion and long-term microstructure prediction, thereby unifying spatiotemporal feature learning with physical parameter identification. It can infer unknown external parameters from short simulated trajectories or experimental videos, and it extrapolates stably to larger computational domains and near the critical conditions for spinodal decomposition. Experiments demonstrate high accuracy in predicting both the lattice misfit and the corresponding microstructural evolution across a wide range of misfit scenarios, with only mild temporal extrapolation error, thereby alleviating key computational bottlenecks of conventional phase-field simulations.
📝 Abstract
We introduce a unified machine-learning framework designed to conveniently tackle the temporal evolution of alloy microstructures under the influence of an elastic field. This approach allows for the simultaneous extraction of elastic parameters from a short trajectory and for the prediction of the subsequent microstructure evolution under their influence. This is demonstrated by focusing on spinodal decomposition in the presence of a lattice misfit η, and by carrying out an extensive comparison between the ground-truth evolution supplied by phase-field simulations and the predictions of suitable convolutional recurrent neural network architectures. The two tasks are then performed sequentially within a cascade framework. Across a wide spectrum of misfit conditions, the cascade model presented here accurately predicts η and the full corresponding microstructure evolution, even when approaching critical conditions for spinodal decomposition. Scalability to larger computational domains and mild extrapolation errors in time (for sequences five times longer than those sampled during training) are demonstrated. The proposed framework is general and can be applied beyond the specific, prototypical system considered here as an example. Intriguingly, experimental videos could be used to infer unknown external parameters prior to simulating further temporal evolution.
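The two-stage cascade described in the abstract — inferring the misfit from a short trajectory, then rolling out the microstructure conditioned on that estimate — can be sketched as a toy numpy program. This is a minimal schematic under stated assumptions, not the paper's implementation: the class names (`MisfitEncoder`, `ConvRNNPredictor`) are hypothetical, the weights are random and untrained, and simple shifted-sum convolutions stand in for the actual CNN/RNN layers.

```python
import numpy as np

def conv3x3(x, w):
    """3x3 convolution with periodic boundaries, built from shifted copies.
    x: (H, W) field; w: (3, 3) kernel."""
    out = np.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += w[di + 1, dj + 1] * np.roll(np.roll(x, di, axis=0), dj, axis=1)
    return out

rng = np.random.default_rng(0)

class MisfitEncoder:
    """Stage 1 (hypothetical): regress a scalar misfit eta from a short trajectory."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=(3, 3))
        self.readout = rng.normal(scale=0.1)

    def __call__(self, frames):  # frames: (T, H, W)
        feats = [np.tanh(conv3x3(f, self.w)).mean() for f in frames]
        return float(self.readout * np.mean(feats))  # scalar eta estimate

class ConvRNNPredictor:
    """Stage 2 (hypothetical): autoregressive conv-recurrent rollout conditioned on eta."""
    def __init__(self):
        self.w_x = rng.normal(scale=0.1, size=(3, 3))
        self.w_h = rng.normal(scale=0.1, size=(3, 3))

    def rollout(self, frame, eta, n_steps):
        h = np.zeros_like(frame)  # recurrent hidden state, same shape as the field
        preds = []
        for _ in range(n_steps):
            h = np.tanh(conv3x3(frame, self.w_x) + conv3x3(h, self.w_h) + eta)
            frame = frame + 0.1 * h  # residual update of the concentration field
            preds.append(frame)
        return np.stack(preds)  # (n_steps, H, W)

# Cascade: infer eta from a short trajectory, then extrapolate further in time.
short_traj = rng.normal(size=(5, 32, 32))  # 5 observed frames on a 32x32 domain
eta_hat = MisfitEncoder()(short_traj)
future = ConvRNNPredictor().rollout(short_traj[-1], eta_hat, n_steps=25)
print(future.shape)  # (25, 32, 32)
```

The cascade structure — a frozen stage-1 estimate fed as a conditioning scalar into the stage-2 rollout — is what lets the same predictor handle experimental videos, where η is not known a priori.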