🤖 AI Summary
Addressing the challenge of cross-modal retrieval from textual descriptions to crystal structures in materials science, this paper introduces the first structure–text contrastive joint pretraining framework tailored for crystalline materials. Methodologically, it integrates a Crystal Graph Neural Network (CGNN) with a modified BERT encoder to construct a unified cross-modal embedding space; critically, it pioneers the application of contrastive learning to materials multimodal pretraining, enabling interpretable and semantically aligned text–structure representations. Trained on a large-scale corpus of literature–structure pairs, the framework significantly outperforms existing baselines in text-driven crystal screening—achieving a 23.6% average improvement in Recall@10. Visualization and ablation studies confirm that the learned embeddings effectively capture functional and performance similarities among materials, thereby supporting intuitive, traceable, and physics-informed materials discovery.
📝 Abstract
Understanding structure–property relationships is an essential yet challenging aspect of materials discovery and development. To facilitate this process, recent studies in materials informatics have sought latent embedding spaces of crystal structures that capture their similarities based on properties and functionalities. However, abstract feature-based embedding spaces are difficult for humans to interpret and prevent intuitive and efficient exploration of the vast materials space. Here we introduce Contrastive Language–Structure Pre-training (CLaSP), a learning paradigm for constructing cross-modal embedding spaces between crystal structures and texts. CLaSP aims to achieve material embeddings that 1) capture property- and functionality-related similarities between crystal structures and 2) allow intuitive retrieval of materials via user-provided description texts as queries. To compensate for the lack of sufficient datasets linking crystal structures with textual descriptions, CLaSP leverages a dataset of over 400,000 published crystal structures and corresponding publication records, including paper titles and abstracts, for training. We demonstrate the effectiveness of CLaSP through text-based crystal structure screening and embedding space visualization.
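The contrastive objective described above can be sketched in a CLIP-style form: a symmetric InfoNCE loss that pulls each crystal embedding toward the embedding of its paired text and pushes it away from all other texts in the batch. This is a minimal illustration, not the paper's implementation; the function names, the fixed temperature value, and the use of precomputed embeddings (rather than live CGNN/BERT encoders) are assumptions for the sketch.

```python
import numpy as np

def clip_style_loss(struct_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    struct_emb, text_emb: (N, d) arrays where row i of each array
    is a matched structure-text pair. Temperature is fixed here;
    in practice it is often a learnable parameter.
    """
    # L2-normalize so dot products become cosine similarities
    s = struct_emb / np.linalg.norm(struct_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = s @ t.T / temperature   # (N, N); matched pairs on the diagonal
    labels = np.arange(len(s))

    def xent(l):
        # Row-wise softmax cross-entropy against the diagonal labels
        l = l - l.max(axis=1, keepdims=True)           # numerical stability
        logprob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logprob[labels, labels].mean()

    # Average the structure->text and text->structure directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero, while mismatched pairs yield a large loss, which is what drives the joint embedding space toward semantic alignment during pretraining.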