Evaluating Multi-Hop Reasoning in Large Language Models: A Chemistry-Centric Case Study

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant limitations in multi-hop compositional reasoning within chemistry, yet no dedicated benchmark exists to systematically evaluate this capability. Method: We introduce the first chemical multi-hop reasoning benchmark, built via an expert-validated, fully automated pipeline: (1) extracting chemical entities from scientific literature, (2) integrating external knowledge bases to construct a domain-specific knowledge graph, and (3) generating high-quality multi-hop question-answer pairs. We comprehensively assess LLMs' reasoning performance with and without retrieval-augmented generation (RAG). Results: Current LLMs show substantial deficits in chemical multi-hop reasoning. While RAG improves performance, even oracle-level retrieval fails to eliminate fundamental compositional reasoning errors, confirming the intrinsic difficulty of chaining heterogeneous chemical facts. This work establishes a methodological paradigm and a publicly available benchmark for scalable, cross-domain scientific reasoning evaluation.

📝 Abstract
In this study, we introduced a new benchmark consisting of a curated dataset and a defined evaluation process to assess the compositional reasoning capabilities of large language models within the chemistry domain. We designed and validated a fully automated pipeline, verified by subject matter experts, to facilitate this task. Our approach integrates OpenAI reasoning models with named entity recognition (NER) systems to extract chemical entities from recent literature, which are then augmented with external knowledge bases to form a comprehensive knowledge graph. By generating multi-hop questions across these graphs, we assess LLM performance in both context-augmented and non-augmented settings. Our experiments reveal that even state-of-the-art models face significant challenges in multi-hop compositional reasoning. The results reflect the importance of augmenting LLMs with document retrieval, which can have a substantial impact on improving their performance. However, even perfect retrieval accuracy with full context does not eliminate reasoning errors, underscoring the complexity of compositional reasoning. This work not only benchmarks and highlights the limitations of current LLMs but also presents a novel data generation pipeline capable of producing challenging reasoning datasets across various domains. Overall, this research advances our understanding of reasoning in computational linguistics.
Problem

Research questions and friction points this paper is trying to address.

Assessing multi-hop reasoning in LLMs for chemistry tasks
Developing automated pipeline for chemical knowledge graph creation
Evaluating impact of context augmentation on LLM performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline with expert validation
NER and knowledge graph integration
Multi-hop question generation for evaluation
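The question-generation step above can be sketched in miniature: walk a fixed-length path through a knowledge graph of (subject, relation, object) triples and phrase the chained relations as a question whose gold answer is the entity at the end of the path. This is a hypothetical illustration of the general technique, not the paper's actual pipeline; the entities, relations, and phrasing template are invented for the example.

```python
# Toy chemistry knowledge graph as (subject, relation, object) triples.
# All names here are illustrative, not from the paper's dataset.
TRIPLES = [
    ("aspirin", "has_active_metabolite", "salicylic acid"),
    ("salicylic acid", "inhibits", "COX-1"),
    ("COX-1", "encoded_by", "PTGS1"),
]

def build_graph(triples):
    """Index triples as an adjacency map: entity -> list of (relation, object)."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph

def generate_multihop_qa(graph, start, hops):
    """Follow `hops` outgoing edges from `start`, collecting each relation
    as a clause; the entity reached at the end is the gold answer."""
    entity = start
    clauses = []
    for _ in range(hops):
        rel, entity = graph[entity][0]  # take the first outgoing edge
        clauses.append(rel.replace("_", " "))
    question = (
        f"Starting from {start}, what entity do you reach via: "
        + ", then ".join(clauses) + "?"
    )
    return question, entity

graph = build_graph(TRIPLES)
question, answer = generate_multihop_qa(graph, "aspirin", hops=3)
print(question)
print(answer)  # PTGS1
```

A real pipeline would sample many distinct paths, filter for answer uniqueness (only one entity reachable via that relation chain), and use an LLM or templates to phrase the chain as natural language, but the core chaining step is the path walk shown here.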
Mohammad Khodadad
Research Assistant, McMaster University
Machine Learning, Graph Theory, Bioinformatics, Reinforcement Learning, Computer Vision
Ali Shiraee Kasmaee
Department of Computational Science and Engineering, McMaster University, Canada; BASF Canada Inc., Canada
Mahdi Astaraki
Department of Computational Science and Engineering, McMaster University, Canada
Nick Sherck
BASF Corporation, USA
H. Mahyar
Department of Computational Science and Engineering, McMaster University, Canada
Soheila Samiee
Senior Applied Research Scientist, BASF
Large Language Models, Tabular Deep Learning, Machine Learning, Time-Series Analysis, Neuroscience