Advancing Molecular Graph-Text Pre-training via Fine-grained Alignment

📅 2024-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing molecular graph–text alignment methods model molecules holistically, neglecting the functional substructures (motifs) that critically determine molecular properties, which compromises both generalizability and interpretability. To address this, the authors propose FineMolTex, a framework for jointly learning coarse-grained (molecule-level) and fine-grained (motif-level) graph–text representations. Its core components are: (i) a masked multimodal modeling task that selects motifs and words for masking based on their importance and predicts their labels; (ii) a contrastive alignment task coupling graph neural network (GNN) encodings with text-encoder outputs; and (iii) a motif extraction and importance-scoring mechanism that yields interpretable motif–word alignment. On text-based molecule editing, FineMolTex achieves up to a 238% performance gain; it also reports state-of-the-art results in property prediction and inverse molecular design. Case studies confirm that it captures motif–semantic associations, suggesting utility for drug and catalyst discovery.

📝 Abstract
Understanding molecular structure and related knowledge is crucial for scientific research. Recent studies integrate molecular graphs with their textual descriptions to enhance molecular representation learning. However, they focus on the whole molecular graph and neglect frequently occurring subgraphs, known as motifs, which are essential for determining molecular properties. Without such fine-grained knowledge, these models struggle to generalize to unseen molecules and tasks that require motif-level insights. To bridge this gap, we propose FineMolTex, a novel Fine-grained Molecular graph-Text pre-training framework to jointly learn coarse-grained molecule-level knowledge and fine-grained motif-level knowledge. Specifically, FineMolTex consists of two pre-training tasks: a contrastive alignment task for coarse-grained matching and a masked multi-modal modeling task for fine-grained matching. In particular, the latter predicts the labels of masked motifs and words, which are selected based on their importance. By leveraging insights from both modalities, FineMolTex is able to understand the fine-grained matching between motifs and words. Finally, we conduct extensive experiments across three downstream tasks, achieving up to 238% improvement in the text-based molecule editing task. Additionally, our case studies reveal that FineMolTex successfully captures fine-grained knowledge, potentially offering valuable insights for drug discovery and catalyst design.
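The two pre-training objectives described in the abstract can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the function names, temperature, and mask ratio are assumptions, and the real model operates on learned GNN/text-encoder embeddings rather than raw arrays.

```python
import numpy as np

def info_nce(graph_emb, text_emb, temperature=0.1):
    """Coarse-grained contrastive alignment: matched graph/text pairs
    (row i with row i) are pulled together, mismatched pairs pushed apart."""
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = g @ t.T / temperature                   # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # diagonal = matched pairs

def masked_motif_loss(motif_logits, true_labels, importance, mask_ratio=0.5):
    """Fine-grained masked modeling: mask the most *important* motifs
    (or words) and score cross-entropy on predicting their labels."""
    k = max(1, int(len(importance) * mask_ratio))
    masked = np.argsort(importance)[-k:]             # importance-guided selection
    logits = motif_logits[masked]
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(k), true_labels[masked]])
```

In such a scheme the overall pre-training loss would simply be a (possibly weighted) sum of the two terms, so molecule-level matching and motif-level matching are optimized jointly.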
Problem

Research questions and friction points this paper is trying to address.

Coarse, molecule-level-only graph–text alignment
Neglect of fine-grained motif-level knowledge
Poor generalization to unseen molecules and motif-dependent tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint coarse- and fine-grained graph–text pre-training (FineMolTex)
Contrastive alignment plus importance-guided masked multi-modal modeling
Interpretable motif–word matching, with up to 238% gains on text-based molecule editing