π€ AI Summary
Existing deep learning approaches struggle to simultaneously address the diversity of material property prediction tasks and the heterogeneity of materials data, resulting in limited generalization capability. To overcome this, we propose a modular deep learning paradigm tailored for materials science: first, pretraining task-specialized, reusable modules; then, dynamically assembling them via differentiable composition mechanisms to enable adaptive, task-specific collaborative modeling. This paradigm departs from the conventional βpretrain-fine-tuneβ paradigm by integrating multi-task pretraining, modular network architecture, and joint optimization strategies. Evaluated on 17 benchmark datasets, our method achieves an average improvement of 14% over state-of-the-art baselines. It notably enhances few-shot generalization and continual learning performance, demonstrating strong deployment feasibility and scalability in real-world materials discovery scenarios.
π Abstract
Deep learning methods for material property prediction have been widely explored to advance materials discovery. However, the prevailing pre-train then fine-tune paradigm often fails to address the inherent diversity and disparity of material tasks. To overcome these challenges, we introduce MoMa, a Modular framework for Materials that first trains specialized modules across a wide range of tasks and then adaptively composes synergistic modules tailored to each downstream scenario. Evaluation across 17 datasets demonstrates the superiority of MoMa, with a substantial 14% average improvement over the strongest baseline. Few-shot and continual learning experiments further highlight MoMa's potential for real-world applications. Pioneering a new paradigm of modular material learning, MoMa will be open-sourced to foster broader community collaboration.