🤖 AI Summary
This study addresses the lack of falsifiability in current evaluations of large language models' (LLMs') scientific reasoning, which makes claims of genuinely novel discovery impossible to verify. Drawing on Popperian philosophy of science, the authors bring the principle of falsifiability to bear on the field and identify methodological biases in existing assessments, such as opaque training data and the neglect of failure cases. They propose a validation framework grounded in falsifiability, transparency, and reproducibility, and develop research guidelines tailored to LLM-based scientific reasoning, establishing both a theoretical foundation and methodological standards for AI-driven scientific discovery.
📝 Abstract
Recent reports claim that Large Language Models (LLMs) have achieved the ability to derive new science and exhibit human-level general intelligence. We argue that such claims are not rigorous scientific claims, as they do not satisfy Popper's refutability principle (often termed falsifiability), which requires that scientific statements be capable of being disproven. We identify several methodological pitfalls in current AI research on reasoning, including the inability to verify the novelty of findings due to opaque and non-searchable training data, the lack of reproducibility caused by continuous model updates, and the omission of human-interaction transcripts, which obscures the true source of a scientific discovery. Additionally, the absence of counterfactuals and of data on failed attempts creates a selection bias that may exaggerate LLM capabilities. To address these challenges, we propose guidelines for scientific transparency and reproducibility in research on reasoning by LLMs. Establishing such guidelines is crucial for both scientific integrity and the ongoing societal debates regarding fair data usage.
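The novelty pitfall has a concrete operational flavor: if training data were disclosed and searchable, a claimed discovery could at least be screened for verbatim or near-verbatim overlap with the corpus the model was trained on. Below is a minimal, hypothetical sketch of such a screen using word n-gram overlap; the function names, the overlap heuristic, and the toy corpus are illustrative assumptions, not a method from the paper, and passing such a screen would be necessary but far from sufficient evidence of novelty.

```python
# Hypothetical sketch of the kind of novelty screen that a searchable
# training corpus would enable. All names and the n-gram heuristic are
# assumptions for illustration, not the paper's proposed method.
from typing import Iterable


def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in `text` (a crude fingerprint)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def max_overlap(claim: str, corpus: Iterable[str], n: int = 8) -> float:
    """Highest fraction of the claim's n-grams found in any one corpus document.

    A high overlap suggests the "discovery" may be restated training data;
    a low overlap does not by itself establish novelty.
    """
    claim_grams = ngrams(claim, n)
    if not claim_grams:
        return 0.0
    best = 0.0
    for doc in corpus:
        shared = len(claim_grams & ngrams(doc, n))
        best = max(best, shared / len(claim_grams))
    return best


if __name__ == "__main__":
    # Toy stand-in for a disclosed, indexable training corpus.
    corpus = [
        "the boiling point of water decreases as atmospheric pressure drops",
        "gradient descent minimizes a loss function by iterative updates",
    ]
    claim = "the boiling point of water decreases as atmospheric pressure drops"
    print(f"overlap = {max_overlap(claim, corpus, n=4):.2f}")  # 1.00: restated training data
```

Such a check only becomes meaningful once training data are disclosed and indexable, which is precisely the transparency requirement the authors argue for.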