Assessing the Chemical Intelligence of Large Language Models

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work has largely overlooked the intrinsic chemical reasoning capabilities of large language models (LLMs), i.e., their ability to perform native, tool-free, language-driven chemical understanding and deduction. Method: We introduce ChemIQ, the first short-answer benchmark for organic chemistry (796 questions), covering molecular representation, SMILES–IUPAC interconversion, and NMR spectral interpretation. Contribution/Results: We demonstrate, for the first time, that reasoning-optimized LLMs (e.g., o3-mini) can autonomously execute multi-step symbolic chemical operations and human-like logical inference: answering 28%–59% of questions correctly depending on the reasoning level (versus 7% for GPT-4o), and achieving a 74% success rate in NMR-based structure elucidation for molecules with up to 10 heavy atoms; notably, in one case a 21-heavy-atom structure was fully resolved from NMR data. These results establish that LLMs possess significant endogenous chemical reasoning capacity, providing a foundation for tool-free chemical AI.

📝 Abstract
Large Language Models are versatile, general-purpose tools with a wide range of applications. Recently, the advent of "reasoning models" has led to substantial improvements in their abilities in advanced problem-solving domains such as mathematics and software engineering. In this work, we assessed the ability of reasoning models to directly perform chemistry tasks, without any assistance from external tools. We created a novel benchmark, called ChemIQ, which consists of 796 questions assessing core concepts in organic chemistry, focused on molecular comprehension and chemical reasoning. Unlike previous benchmarks, which primarily use multiple-choice formats, our approach requires models to construct short-answer responses, more closely reflecting real-world applications. The reasoning models, exemplified by OpenAI's o3-mini, correctly answered 28%–59% of questions depending on the reasoning level used, with higher reasoning levels significantly increasing performance on all tasks. These models substantially outperformed the non-reasoning model, GPT-4o, which achieved only 7% accuracy. We found that Large Language Models can now convert SMILES strings to IUPAC names, a task earlier models were unable to perform. Additionally, we show that the latest reasoning models can elucidate structures from ¹H and ¹³C NMR data, correctly generating SMILES strings for 74% of molecules containing up to 10 heavy atoms, and in one case solving a structure comprising 21 heavy atoms. For each task, we found evidence that the reasoning process mirrors that of a human chemist. Our results demonstrate that the latest reasoning models have the ability to perform advanced chemical reasoning.
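Scoring a short-answer benchmark like ChemIQ requires checking a model's free-text answer against a reference, rather than matching a multiple-choice letter. The sketch below shows a minimal exact-match scorer; it is illustrative only and not the paper's actual evaluation code. In particular, real SMILES comparison would canonicalize both strings with a cheminformatics toolkit such as RDKit, whereas this stand-in only normalizes whitespace and case, so equivalent but differently written SMILES (as in the benzene example) are counted as wrong.

```python
def normalize(answer: str) -> str:
    """Collapse whitespace and case so trivially different answers match."""
    return "".join(answer.split()).lower()

def score(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference answer."""
    assert len(predictions) == len(references)
    correct = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return correct / len(references)

# Hypothetical answers: the two benzene SMILES are chemically identical,
# but naive string matching misses that, which is why a real scorer
# canonicalizes SMILES before comparing.
preds = ["CCO", "c1ccccc1", "propan-2-ol"]
refs = ["CCO", "C1=CC=CC=C1", "propan-2-ol"]
print(f"Accuracy: {score(preds, refs):.0%}")  # 2 of 3 match under naive comparison
```

The gap between naive string equality and chemical equivalence is exactly why canonical SMILES generation is the standard preprocessing step when grading structure-elucidation answers.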
Problem

Research questions and friction points this paper is trying to address.

Assessing chemical reasoning abilities of Large Language Models
Evaluating performance on organic chemistry tasks without external tools
Measuring accuracy in molecular comprehension and NMR data interpretation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created the ChemIQ benchmark, a 796-question short-answer assessment of organic chemistry
Evaluated reasoning models on chemistry tasks performed directly, without external tools
Demonstrated NMR-based structure elucidation, with 74% success for molecules of up to 10 heavy atoms
Nicholas T. Runcie
Department of Statistics, University of Oxford, Oxford, UK
Charlotte M. Deane
Department of Statistics, University of Oxford, Oxford, UK
Fergus Imrie
University of Oxford
Machine Learning · Drug Discovery · Healthcare · Cheminformatics · Bioinformatics