Improving Large Molecular Language Model via Relation-aware Multimodal Collaboration

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large molecular language models often suffer from hallucinations and limited generalization due to their inability to effectively integrate multimodal information—such as 1D sequences, 2D graph structures, and 3D conformations. To address this, the paper proposes CoLLaMo, a model that enables fine-grained, relation-guided information interaction at the atomic level through a hierarchical collaborative projector and a relation-aware multimodal co-attention mechanism. The study further introduces a novel automatic evaluation framework tailored for molecular tasks, incorporating hallucination quantification metrics and GPT-based description quality assessment, thereby overcoming the limitations of conventional token-level evaluations. CoLLaMo significantly outperforms existing methods across diverse tasks—including molecular captioning, property-based question answering, motif counting, and IUPAC naming—demonstrating superior multimodal understanding and generalization capabilities.
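The summary mentions hallucination quantification metrics for generated molecular descriptions. As a minimal sketch of the general idea (not the paper's actual metric, whose definition is not given here), one can measure the fraction of claims in a generated caption that are unsupported by ground-truth annotations; the function name and set-based formulation below are illustrative assumptions.

```python
def hallucination_rate(predicted_mentions, ground_truth):
    """Toy hallucination metric: fraction of distinct claims in a generated
    molecular description (e.g., functional-group mentions) that do not
    appear in the ground-truth annotation set. Returns 0.0 for empty output.
    Illustrative only; not CoLLaMo's published metric."""
    preds = set(predicted_mentions)
    if not preds:
        return 0.0
    unsupported = preds - set(ground_truth)
    return len(unsupported) / len(preds)


# Example: one of two claimed groups is unsupported -> rate 0.5
rate = hallucination_rate(["hydroxyl", "nitro"], ["hydroxyl"])
```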

📝 Abstract
Large language models (LLMs) have demonstrated strong instruction-following capabilities and achieved powerful performance on various tasks. Inspired by their success, recent works in the molecular domain have developed large molecular language models (LMLMs) that integrate 1D molecular strings or 2D molecular graphs into language models. However, existing LMLMs often suffer from hallucination and limited robustness, largely due to inadequate integration of diverse molecular modalities such as 1D sequences, 2D molecular graphs, and 3D conformations. To address these limitations, we propose CoLLaMo, an LLM-based molecular assistant equipped with a multi-level molecular modality-collaborative projector. The relation-aware modality-collaborative attention mechanism in the projector facilitates fine-grained, relation-guided information exchange between atoms by incorporating 2D structural and 3D spatial relations. Furthermore, we present a new molecule-centric automatic evaluation, comprising a hallucination assessment metric and GPT-based caption quality evaluation, to address the limitations of token-level generic metrics (e.g., BLEU) widely used to assess the molecular comprehension of LMLMs. Extensive experiments demonstrate that CoLLaMo enhances the molecular modality generalization capabilities of LMLMs, achieving the best performance on multiple tasks, including molecule captioning, computed property QA, descriptive property QA, motif counting, and IUPAC name prediction.
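The abstract describes relation-aware attention that injects 2D structural and 3D spatial relations into atom-level information exchange. A common way to realize this family of mechanisms is to bias the attention logits with per-pair relation terms (an additive bonus for bonded pairs, a penalty growing with 3D distance). The sketch below illustrates that pattern under stated assumptions; the function names, the additive biasing scheme, and the hyperparameters are illustrative, not CoLLaMo's published architecture.

```python
# Sketch of relation-biased attention over atom tokens. Assumptions:
# adjacency gives the 2D bond relation, pairwise Euclidean distance gives
# the 3D spatial relation, and both enter the logits additively.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(atom_feats, adj, coords, w_q, w_k, w_v,
                             bond_bias=1.0, dist_scale=0.1):
    """atom_feats: (n, d) per-atom embeddings
       adj:        (n, n) 2D bond adjacency matrix (0/1)
       coords:     (n, 3) 3D conformer coordinates
       w_q/w_k/w_v: (d, d) projection matrices"""
    q, k, v = atom_feats @ w_q, atom_feats @ w_k, atom_feats @ w_v
    logits = q @ k.T / np.sqrt(q.shape[-1])        # scaled dot-product
    logits = logits + bond_bias * adj              # 2D structural relation
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    logits = logits - dist_scale * dist            # 3D spatial relation
    return softmax(logits, axis=-1) @ v            # (n, d) updated atoms


# Usage: four atoms with one bond between atoms 0 and 1.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
adj = np.zeros((4, 4)); adj[0, 1] = adj[1, 0] = 1.0
coords = rng.normal(size=(4, 3))
w = rng.normal(size=(8, 8))
out = relation_aware_attention(feats, adj, coords, w, w, w)
```

Additive logit biases of this kind keep the mechanism a drop-in replacement for standard attention while letting bonded or spatially close atom pairs exchange more information.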
Problem

Research questions and friction points this paper is trying to address.

large molecular language models
multimodal integration
hallucination
molecular robustness
molecular modality
Innovation

Methods, ideas, or system contributions that make the work stand out.

relation-aware attention
multimodal collaboration
molecular language model
3D conformation integration
hallucination assessment