🤖 AI Summary
Existing molecular pretraining methods rely on paired 2D/3D data to prevent modality collapse, which limits their applicability when one modality is missing or costly to generate. This work proposes FlexMol, the first unified molecular pretraining framework supporting both unimodal and bimodal inputs. Methodologically, FlexMol employs a dual-branch encoder-decoder architecture with parameter sharing and a cross-modal decoder to jointly model 2D graph topology and 3D geometric features, enabling flexible training on unpaired 2D or 3D data while mitigating collapse via cross-modal reconstruction and contrastive learning. Experimentally, FlexMol achieves state-of-the-art performance across diverse molecular property prediction tasks. Crucially, it remains robust under data incompleteness, e.g., when only 2D or only 3D data is available, thereby improving representation generalizability for drug discovery and materials design.
📝 Abstract
Molecular representation learning plays a crucial role in advancing applications such as drug discovery and materials design. Existing work leverages 2D and 3D modalities of molecular information for pre-training, aiming to capture comprehensive structural and geometric insights. However, these methods require paired 2D and 3D molecular data to train the model effectively and to prevent it from collapsing into a single modality, posing limitations in scenarios where one modality is unavailable or computationally expensive to generate. To overcome this limitation, we propose FlexMol, a flexible molecule pre-training framework that learns unified molecular representations while supporting single-modality input. Specifically, inspired by the unified structure in vision-language models, our approach employs separate models for 2D and 3D molecular data, leverages parameter sharing to improve computational efficiency, and utilizes a decoder to generate features for the missing modality. This enables a multistage continuous learning process in which both modalities contribute collaboratively during training, while ensuring robustness when only one modality is available during inference. Extensive experiments demonstrate that FlexMol achieves superior performance across a wide range of molecular property prediction tasks, and we also empirically validate its effectiveness under incomplete data. Our code and data are available at https://github.com/tewiSong/FlexMol.
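To make the two-branch idea concrete, the following is a minimal, hypothetical sketch in numpy of the mechanisms the abstract names: two modality branches reusing shared parameters, a cross-modal decoder that fills in features for a missing modality, and a contrastive (InfoNCE-style) objective between the two views. All dimensions, function names, and the single-layer encoders are illustrative assumptions, not the actual FlexMol implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
D_IN, D_HID = 16, 8

# Both branches reuse W_shared, illustrating FlexMol-style parameter sharing.
W_shared = rng.normal(size=(D_IN, D_HID))
# Cross-modal decoder weights: maps one modality's embedding to a
# surrogate embedding for the missing modality.
W_dec = rng.normal(size=(D_HID, D_HID))


def encode(x):
    """Encode a modality with the shared projection (one ReLU layer)."""
    return np.maximum(x @ W_shared, 0.0)


def decode_missing(z):
    """Generate features for the missing modality from the present one."""
    return np.tanh(z @ W_dec)


def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE-style contrastive loss: matched pairs are positives."""
    z_a = z_a / (np.linalg.norm(z_a, axis=1, keepdims=True) + 1e-9)
    z_b = z_b / (np.linalg.norm(z_b, axis=1, keepdims=True) + 1e-9)
    logits = z_a @ z_b.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))


# Paired case: both modalities present (toy random "features").
x2d = rng.normal(size=(4, D_IN))
x3d = rng.normal(size=(4, D_IN))
z2d, z3d = encode(x2d), encode(x3d)
loss_paired = info_nce(z2d, z3d)

# Unpaired case: only 2D is available; the decoder supplies a 3D surrogate
# so the contrastive/reconstruction signal still exists.
z3d_hat = decode_missing(z2d)
loss_unpaired = info_nce(z2d, z3d_hat)
```

At inference, only the branch matching the available modality is needed, which is what gives the framework its robustness to single-modality input.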