🤖 AI Summary
Existing cross-modal text–molecule retrieval methods neglect difficulty-aware adaptation and efficiency optimization during training. To address this, we propose CLASS, a curriculum learning framework that— for the first time—integrates dual-modality difficulty quantification, dynamic easy-to-hard sample scheduling, and progressive learning intensity control to enable adaptive evolution of the training process. CLASS encodes molecular graphs via GIN/MPNN and textual descriptions via BERT, jointly optimizing a cross-modal alignment objective. On the ChEBI-20 benchmark, CLASS achieves state-of-the-art retrieval performance (3.2% improvement in Recall@10) with significantly reduced training time, enhanced convergence stability, and 37% lower early-stage computational overhead. Its core contribution lies in establishing a differentiable, adaptive curriculum learning paradigm specifically designed for text–molecule cross-modal retrieval.
📝 Abstract
Cross-modal text-molecule retrieval task bridges molecule structures and natural language descriptions. Existing methods predominantly focus on aligning text modality and molecule modality, yet they overlook adaptively adjusting the learning states at different training stages and enhancing training efficiency. To tackle these challenges, this paper proposes a Curriculum Learning-bAsed croSS-modal text-molecule training framework (CLASS), which can be integrated with any backbone to yield promising performance improvement. Specifically, we quantify the sample difficulty considering both text modality and molecule modality, and design a sample scheduler to introduce training samples via an easy-to-difficult paradigm as the training advances, remarkably reducing the scale of training samples at the early stage of training and improving training efficiency. Moreover, we introduce adaptive intensity learning to increase the training intensity as the training progresses, which adaptively controls the learning intensity across all curriculum stages. Experimental results on the ChEBI-20 dataset demonstrate that our proposed method gains superior performance, simultaneously achieving prominent time savings.