🤖 AI Summary
Existing graph self-supervised learning methods for molecular representation often overlook chemically relevant substructural information, limiting their ability to model the key fragments that govern molecular properties. To address this, we propose GraSPNet, a framework that introduces fragment-level modeling into self-supervised molecular graph learning without requiring a predefined vocabulary. Leveraging unsupervised fragment decomposition, GraSPNet performs hierarchical message passing and masked semantic prediction at both the atomic and fragment granularities. This joint atom-fragment, multi-resolution self-supervision enhances the chemical interpretability, expressiveness, and cross-task transferability of the learned representations. Extensive experiments show that GraSPNet consistently outperforms current graph self-supervised approaches on multiple molecular property prediction benchmarks.
📝 Abstract
Graph self-supervised learning (GSSL) has demonstrated strong potential for generating expressive graph embeddings without the need for human annotations, making it particularly valuable in domains with high labeling costs, such as molecular graph analysis. However, existing GSSL methods mostly focus on node- or edge-level information, often ignoring chemically relevant substructures that strongly influence molecular properties. In this work, we propose the Graph Semantic Predictive Network (GraSPNet), a hierarchical self-supervised framework that explicitly models both atomic-level and fragment-level semantics. GraSPNet decomposes molecular graphs into chemically meaningful fragments without a predefined vocabulary and learns node- and fragment-level representations through multi-level message passing with masked semantic prediction at both levels. This hierarchical semantic supervision enables GraSPNet to learn multi-resolution structural information that is both expressive and transferable. Extensive experiments on multiple molecular property prediction benchmarks demonstrate that GraSPNet learns chemically meaningful representations and consistently outperforms state-of-the-art GSSL methods in transfer learning settings.
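To make the core pretraining idea concrete, the following is a minimal, library-free sketch of hierarchical masked semantic prediction: atoms are pooled into fragment representations, one fragment is masked, and its representation is predicted from its neighbors in the fragment graph. The toy molecule, the fragmentation, the mean-pooling "encoder", and all names here are illustrative assumptions, not GraSPNet's actual architecture.

```python
# Toy molecule: atom feature vectors, grouped into two fragments.
atom_features = {
    0: [1.0, 0.0],  # e.g. a carbon-like atom
    1: [1.0, 0.0],
    2: [0.0, 1.0],  # e.g. an oxygen-like atom
    3: [0.0, 1.0],
}
# Stand-in for unsupervised fragment decomposition (no predefined vocabulary).
fragments = {"frag_a": [0, 1], "frag_b": [2, 3]}
frag_edges = [("frag_a", "frag_b")]  # coarse fragment-level graph

def mean_pool(vectors):
    """Pool a list of feature vectors into one representation."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Atom -> fragment hierarchy: fragment embeddings from member atoms.
frag_repr = {f: mean_pool([atom_features[a] for a in atoms])
             for f, atoms in fragments.items()}

# Masked semantic prediction at the fragment level: hide frag_b and
# predict its representation from its fragment-graph neighbors.
masked = "frag_b"
neighbors = ([u for u, v in frag_edges if v == masked] +
             [v for u, v in frag_edges if u == masked])
prediction = mean_pool([frag_repr[n] for n in neighbors])

# In pretraining, the loss would compare `prediction` against the
# held-out target frag_repr["frag_b"]; the same masking is applied
# at the atom level for multi-resolution supervision.
print(prediction)
```

In a real model the mean pooling would be replaced by learned message-passing encoders at both levels, but the structure of the objective, reconstructing masked semantics from context at each granularity, is the same.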