MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can autonomously generate novel, scientifically valid hypotheses in chemistry, without domain-specific constraints, given only research background texts (e.g., problem statements or background surveys). Method: We introduce the first high-quality benchmark for chemical hypothesis discovery: a manually curated dataset of 51 papers from top-tier 2024 journals, each formalized under a computable “background + inspirations → hypothesis” paradigm. Our approach is a multi-stage LLM agent framework that integrates inspiration retrieval, hypothesis derivation, and similarity-driven ranking, combining retrieval-augmented generation (RAG) with multi-agent collaboration. Results: Experiments demonstrate that LLMs trained only on pre-2023 data can reconstruct the majority of original hypotheses with high fidelity in this rediscovery setting, capturing the core scientific innovations. This constitutes the first systematic validation of LLMs’ capability to drive original, domain-grounded hypothesis generation in chemistry.
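The “background + inspirations → hypothesis” decomposition can be pictured as a simple record per benchmark paper. The sketch below is illustrative only; the field names are assumptions, not the paper’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one benchmark entry under the
# "background + inspirations -> hypothesis" paradigm.
# Field names are illustrative, not taken from the paper.
@dataclass
class BenchmarkEntry:
    background: str          # research question and/or background survey
    inspirations: list[str]  # prior-work ideas annotated by PhD students
    hypothesis: str          # ground-truth hypothesis to rediscover

entry = BenchmarkEntry(
    background="How can we improve X under condition Y?",
    inspirations=["technique A from domain B"],
    hypothesis="Combining A with Y yields improved X.",
)
```

Each of the 51 papers in the benchmark corresponds to one such annotated triple, and the task is to recover `hypothesis` from `background` plus a large literature corpus.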

📝 Abstract
Scientific discovery contributes largely to human society's prosperity, and recent progress shows that LLMs could potentially catalyze this process. However, it is still unclear whether LLMs can discover novel and valid hypotheses in chemistry. In this work, we investigate this central research question: Can LLMs automatically discover novel and valid chemistry research hypotheses given only a chemistry research background (consisting of a research question and/or a background survey), without limitation on the domain of the research question? After extensive discussions with chemistry experts, we propose the assumption that a majority of chemistry hypotheses can result from a research background and several inspirations. With this key insight, we break the central question into three smaller fundamental questions. In brief, they are: (1) given a background question, whether LLMs can retrieve good inspirations; (2) with a background and inspirations, whether LLMs can derive a hypothesis; and (3) whether LLMs can identify good hypotheses and rank them higher. To investigate these questions, we construct a benchmark consisting of 51 chemistry papers published in Nature, Science, or venues of a similar level in 2024 (all papers have only been available online since 2024). Every paper is divided by chemistry PhD students into three components: background, inspirations, and hypothesis. The goal is to rediscover the hypothesis, given only the background and a large, randomly selected chemistry literature corpus containing the ground-truth inspiration papers, using LLMs trained on data up to 2023. We also develop an LLM-based multi-agent framework that leverages the assumption, consisting of three stages reflecting the three smaller questions. The proposed method can rediscover many hypotheses with very high similarity to the ground-truth ones, covering the main innovations.
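The three stages described in the abstract can be sketched as a small pipeline. This is a hedged, minimal illustration: `call_llm` is a placeholder for any chat-model backend, and the keyword-overlap retrieval and length-based ranking are stand-ins (not the paper's actual scoring methods) so the sketch runs end to end.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM backend; returns a stub so the sketch runs.
    return "stub hypothesis derived from: " + prompt.splitlines()[0]

def retrieve_inspirations(background: str, corpus: list[str], k: int = 3) -> list[str]:
    # Stage 1 (inspiration retrieval): score each corpus paper against the
    # background. A real system would use LLM relevance judgments; naive
    # keyword overlap is used here purely as an illustrative stand-in.
    bg_words = set(background.lower().split())
    def overlap(paper: str) -> int:
        return len(bg_words & set(paper.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def derive_hypotheses(background: str, inspirations: list[str]) -> list[str]:
    # Stage 2 (hypothesis derivation): one candidate per
    # (background, inspiration) pair.
    return [
        call_llm(f"Background: {background}\nInspiration: {insp}\nHypothesis:")
        for insp in inspirations
    ]

def rank_hypotheses(hypotheses: list[str]) -> list[str]:
    # Stage 3 (ranking): order candidates by an assumed quality score;
    # stubbed as string length here so the example is self-contained.
    return sorted(hypotheses, key=len, reverse=True)

corpus = [
    "catalyst design for CO2 reduction",
    "polymer synthesis routes",
    "CO2 capture materials",
]
inspirations = retrieve_inspirations("CO2 reduction catalyst", corpus, k=2)
ranked = rank_hypotheses(derive_hypotheses("CO2 reduction catalyst", inspirations))
```

The three functions mirror the three smaller questions: retrieval quality, derivation quality, and ranking quality can each be evaluated against the PhD-student annotations independently.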
Problem

Research questions and friction points this paper is trying to address.

Can LLMs discover novel chemistry hypotheses from research backgrounds?
Do LLMs retrieve inspirations and generate valid hypotheses effectively?
Can LLMs rank and identify high-quality chemistry hypotheses accurately?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents rediscover chemistry hypotheses from research backgrounds alone.
Three-stage multi-agent framework: inspiration retrieval, hypothesis derivation, ranking.
Benchmark of 51 high-impact 2024 chemistry papers tests rediscovery with pre-2023 LLMs.