How well can off-the-shelf LLMs elucidate molecular structures from mass spectra using chain-of-thought reasoning?

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Directly inferring complete molecular structures from tandem mass spectrometry (MS/MS) data remains challenging due to the complexity of fragmentation patterns and the vastness of chemical space. This work presents the first systematic formalization of expert chemists’ reasoning as chain-of-thought (CoT) prompts to evaluate the structural inference capabilities of leading large language models—including Claude-3.5-Sonnet, GPT-4o-mini, and Llama-3—under zero-shot conditions. Leveraging the MassSpecGym dataset and a multidimensional evaluation framework assessing SMILES validity, molecular formula consistency, and structural similarity, the study finds that while models can generate syntactically valid and partially plausible structures, they fall short of achieving chemical accuracy and struggle to reliably link their reasoning steps to correct predictions.
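The summary mentions reasoning over fragmentation patterns. One such step, neutral loss identification, compares the precursor m/z against each fragment m/z and matches the difference to a table of common losses. A minimal sketch, assuming a hand-picked table of monoisotopic loss masses and an illustrative tolerance (not values from the paper):

```python
# Neutral loss identification: the difference between the precursor m/z and
# a fragment m/z often corresponds to a small, well-known neutral molecule.
# The loss table and tolerance below are illustrative, not from the paper.
COMMON_LOSSES = {
    18.0106: "H2O",   # water
    17.0265: "NH3",   # ammonia
    27.9949: "CO",    # carbon monoxide
    43.9898: "CO2",   # carbon dioxide
}

def neutral_losses(precursor_mz, fragment_mzs, tol=0.01):
    """Return (fragment_mz, loss_name) pairs whose mass difference from
    the precursor matches a known neutral loss within `tol` Da."""
    matches = []
    for frag in fragment_mzs:
        delta = precursor_mz - frag
        for mass, name in COMMON_LOSSES.items():
            if abs(delta - mass) <= tol:
                matches.append((frag, name))
    return matches

print(neutral_losses(100.0, [81.9894, 72.0051]))
```

In practice a chemist cross-checks these candidate losses against the proposed formula; the CoT prompts described here formalize that kind of step into text the model can follow.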

📝 Abstract
Mass spectrometry (MS) is a powerful analytical technique for identifying small molecules, yet determining complete molecular structures directly from tandem mass spectra (MS/MS) remains a long-standing challenge due to complex fragmentation patterns and the vast diversity of chemical space. Recent progress in large language models (LLMs) has shown promise for reasoning-intensive scientific tasks, but their capability for chemical interpretation is still unclear. In this work, we introduce a Chain-of-Thought (CoT) prompting framework and benchmark that evaluate how LLMs reason about mass spectral data to predict molecular structures. We formalize expert chemists' reasoning steps, such as double bond equivalent (DBE) analysis, neutral loss identification, and fragment assembly, into structured prompts and assess multiple state-of-the-art LLMs (Claude-3.5-Sonnet, GPT-4o-mini, and Llama-3 series) in a zero-shot setting using the MassSpecGym dataset. Our evaluation across metrics of SMILES validity, formula consistency, and structural similarity reveals that while LLMs can produce syntactically valid and partially plausible structures, they fail to achieve chemical accuracy or link reasoning to correct molecular predictions. These findings highlight both the interpretive potential and the current limitations of LLM-based reasoning for molecular elucidation, providing a foundation for future work that combines domain knowledge and reinforcement learning to achieve chemically grounded AI reasoning.
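One of the reasoning steps the abstract names, double bond equivalent (DBE) analysis, is simple enough to sketch directly. A minimal version in plain Python, assuming the standard formula DBE = 1 + C + N/2 - (H + X)/2 (X = halogens; O and S do not contribute); the parser and element set here are simplifications, not the paper's implementation:

```python
import re

def dbe(formula: str) -> float:
    """Rings-plus-double-bonds equivalent from a molecular formula string.
    Uses DBE = 1 + C + N/2 - (H + X)/2, where X counts the halogens."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    c = counts.get("C", 0)
    n = counts.get("N", 0)
    h = counts.get("H", 0)
    x = sum(counts.get(hal, 0) for hal in ("F", "Cl", "Br", "I"))
    return 1 + c + n / 2 - (h + x) / 2

print(dbe("C8H10N4O2"))  # caffeine -> 6.0 (three rings-equivalents + unsaturations)
```

A DBE of 6 for C8H10N4O2, for example, immediately constrains candidate structures to ones with six rings or double bonds combined, which is the sort of intermediate constraint the CoT prompts ask the model to derive before proposing a SMILES string.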
Problem

Research questions and friction points this paper is trying to address.

molecular structure elucidation
mass spectrometry
large language models
chain-of-thought reasoning
MS/MS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought prompting
mass spectrometry
molecular structure elucidation
large language models
zero-shot reasoning
Yufeng Wang
Department of Computer Science, Stony Brook University
Lu Wei
Department of Computer Science, Stony Brook University
Lin Liu
Department of Chemistry, Stanford University
Hao Xu
Research Fellow at Harvard Medical School
AI4Science, AI4Healthcare, Chemistry, Biology
Haibin Ling
Chair Professor, Westlake University
computer vision, augmented reality, medical image analysis, machine learning, AI for science