🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models' (LLMs') ability to identify and localize core scientific errors in peer review. To this end, we introduce FLAWS, the first automated benchmark for error localization in research papers. We propose a novel error-injection method that weakens authors' core claims to generate erroneous text that is semantically coherent, relevant to the paper's content, and indistinguishable from human-written text, leveraging real published papers and expert peer reviews with LLM assistance. We design a ranking-based automated evaluation framework that uses top-k fragment error-localization accuracy as the primary metric. Across five state-of-the-art models evaluated on 713 paper–error pairs, GPT-5 performs best, achieving 39.1% accuracy at k=10. This work establishes the first scalable, high-fidelity, and scientifically rigorous quantitative assessment of LLMs' peer-review capabilities.
📝 Abstract
The identification and localization of errors is a core task in peer review, yet the exponential growth of scientific output has made it increasingly difficult for human reviewers to reliably detect errors given the limited pool of experts. Recent advances in Large Language Models (LLMs) have sparked interest in their potential to support such evaluation tasks, from academic peer review to automated scientific assessment. However, despite the growing use of LLMs in review systems, their capabilities to pinpoint errors remain underexplored. In this work, we introduce Fault Localization Across Writing in Science (FLAWS), an automated benchmark consisting of 713 paper-error pairs designed to evaluate how effectively LLMs detect errors that undermine key claims in research papers. We construct the benchmark by systematically inserting claim-invalidating errors into peer-reviewed papers using LLMs, paired with an automated evaluation metric that measures whether models can identify and localize these errors. Developing such a benchmark presents unique challenges that we overcome: ensuring that the inserted errors are well-defined, challenging, and relevant to the content of the paper, avoiding artifacts that would make identification trivial, and designing a scalable, automated evaluation metric. On the resulting benchmark, we evaluate five frontier LLMs: Claude Sonnet 4.5, DeepSeek Reasoner v3.1, Gemini 2.5 Pro, GPT 5, and Grok 4. Among these, GPT 5 is the top-performing model, achieving 39.1% identification accuracy when k=10, where k is the number of top-ranked error text candidates generated by the LLM.
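The primary metric described above, top-k error-localization accuracy, can be sketched in a few lines. The matching rule below (a hit if a top-ranked predicted fragment overlaps the injected error span via substring containment) is an assumption for illustration; the paper's exact matcher may differ.

```python
# Hedged sketch of top-k error-localization accuracy.
# Assumption: a predicted fragment counts as a hit if it overlaps
# the injected error text by substring containment.

def is_hit(fragment: str, error_span: str) -> bool:
    """Assumed matching rule: fragment and injected span overlap."""
    return error_span in fragment or fragment in error_span

def top_k_accuracy(predictions, error_spans, k: int = 10) -> float:
    """predictions: one ranked list of candidate fragments per paper-error pair.
    error_spans: the injected erroneous text for each pair."""
    hits = sum(
        any(is_hit(frag, span) for frag in ranked[:k])
        for ranked, span in zip(predictions, error_spans)
    )
    return hits / len(error_spans)

# Toy example with two paper-error pairs: the first is localized
# within the top k, the second is missed.
preds = [
    ["unrelated sentence", "the flawed claim about convergence"],
    ["another unrelated sentence"],
]
spans = ["flawed claim about convergence", "missing baseline comparison"]
print(top_k_accuracy(preds, spans, k=10))  # 1 of 2 pairs hit -> 0.5
```

Under this scheme a model is rewarded for ranking the injected error anywhere in its top k candidates, which matches the benchmark's ranking-based framing.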