🤖 AI Summary
Large language models (LLMs) exhibit significant limitations in multi-hop compositional reasoning within chemistry, yet no dedicated benchmark exists to systematically evaluate this capability.
Method: We introduce the first chemical multi-hop reasoning benchmark, built via an expert-validated, fully automated pipeline: (1) extracting chemical entities from scientific literature, (2) integrating external knowledge bases to construct a domain-specific knowledge graph, and (3) generating high-quality multi-hop question-answer pairs. We comprehensively assess LLMs’ reasoning performance—with and without retrieval-augmented generation (RAG).
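The second and third pipeline steps, chaining knowledge-graph edges into multi-hop question-answer pairs, can be sketched in miniature. Everything below (the toy triples, entity names, relations, and question phrasing) is an illustrative assumption, not the paper's actual data or generation code:

```python
# Toy chemistry knowledge graph as (head, relation, tail) triples.
# These facts and relation names are invented for illustration only.
TRIPLES = [
    ("aspirin", "has_functional_group", "carboxylic acid"),
    ("carboxylic acid", "reacts_with", "amine"),
    ("ibuprofen", "has_functional_group", "carboxylic acid"),
]

def two_hop_questions(triples):
    """Chain pairs of triples that share an intermediate entity
    into 2-hop question-answer pairs."""
    # Index outgoing edges by head entity for fast chaining.
    index = {}
    for h, r, t in triples:
        index.setdefault(h, []).append((r, t))
    qa_pairs = []
    for h, r1, mid in triples:
        # Extend each edge by every edge leaving its tail entity.
        for r2, tail in index.get(mid, []):
            question = (f"Consider the entity reached from '{h}' via "
                        f"'{r1}'; what does it relate to via '{r2}'?")
            qa_pairs.append((question, tail))
    return qa_pairs

for q, a in two_hop_questions(TRIPLES):
    print(q, "->", a)
```

A real pipeline would walk longer paths and verbalize them with templates or an LLM, but the core compositional structure, answering hop *k* requires the intermediate entity from hop *k−1*, is already visible here.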
Results: Current LLMs show substantial deficits in chemical multi-hop reasoning. While RAG improves performance, even oracle-level retrieval fails to eliminate fundamental compositional reasoning errors—confirming the intrinsic difficulty of chaining heterogeneous chemical facts. This work establishes a methodological paradigm and publicly available benchmark for scalable, cross-domain scientific reasoning evaluation.
📝 Abstract
In this study, we introduce a new benchmark, comprising a curated dataset and a defined evaluation protocol, to assess the compositional reasoning capabilities of large language models (LLMs) in the chemistry domain. We designed a fully automated pipeline, validated by subject-matter experts, to support this task. Our approach combines OpenAI reasoning models with named entity recognition (NER) systems to extract chemical entities from recent literature, then augments them with external knowledge bases to form a comprehensive knowledge graph. By generating multi-hop questions over this graph, we evaluate LLM performance in both context-augmented and context-free settings. Our experiments reveal that even state-of-the-art models face significant challenges in multi-hop compositional reasoning. The results highlight the importance of augmenting LLMs with document retrieval, which can substantially improve performance. However, even perfect retrieval accuracy with full context does not eliminate reasoning errors, underscoring the intrinsic difficulty of compositional reasoning. This work not only benchmarks and exposes the limitations of current LLMs but also presents a novel data-generation pipeline capable of producing challenging reasoning datasets across diverse domains. Overall, this research advances our understanding of reasoning in computational linguistics.
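The two evaluation settings described above (with and without retrieved context) amount to a difference in prompt construction plus a shared answer metric. The sketch below is a minimal illustration under assumed conventions; the prompt format and the exact-match metric are stand-ins, not the paper's actual evaluation harness:

```python
def build_prompt(question, passages=None):
    """Build the model input for one benchmark question.

    In the context-augmented (RAG) setting, retrieved passages are
    prepended; in the context-free setting, the question is asked alone.
    """
    if passages:
        context = "\n".join(f"- {p}" for p in passages)
        return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return f"Question: {question}\nAnswer:"

def exact_match_accuracy(predictions, golds):
    """Fraction of predictions that match gold answers after
    lowercasing and whitespace normalization."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(g) for p, g in zip(predictions, golds))
    return hits / len(golds)

# Example: the same question in both settings (content is illustrative).
q = "Which functional group do aspirin and ibuprofen share?"
print(build_prompt(q))                                   # context-free
print(build_prompt(q, ["Aspirin contains a carboxylic acid group."]))
```

The oracle-retrieval condition reported in the results corresponds to calling `build_prompt` with exactly the gold supporting passages, which isolates reasoning failures from retrieval failures.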