🤖 AI Summary
This study addresses the lack of automated assessment of methodological quality and risk of bias (RoB) in biomedical literature by introducing RoBBR, the first NLP benchmark designed specifically for RoB evaluation. Built on more than 500 peer-reviewed papers and 2,000 expert-annotated instances, RoBBR covers four fine-grained tasks: study design identification, RoB domain classification, bias type determination, and evidence strength rating. It operationalizes established RoB frameworks, such as Cochrane's, in a rigorously evaluable NLP setting, incorporating a strict content-alignment verification protocol. Empirical evaluation shows that state-of-the-art large language models underperform human experts by 32.7 percentage points in average F1 across the four tasks, underscoring the substantial challenge of automating methodological appraisal. The benchmark, including its dataset, annotation guidelines, and open-source code, is publicly released to advance trustworthy AI for scientific evidence assessment.
📝 Abstract
Systems that answer questions by reviewing the scientific literature are becoming increasingly feasible. To draw reliable conclusions, these systems should account for the quality of available evidence, placing more weight on studies that use a valid methodology. We present a benchmark for measuring the methodological strength of biomedical papers, drawing on the risk-of-bias framework used in systematic reviews. The four benchmark tasks, drawn from more than 500 papers, cover the analysis of research study methodology and the subsequent evaluation of risk of bias in these studies. The benchmark contains 2,000 expert-generated bias annotations and a human-validated pipeline for fine-grained alignment with research paper content. We evaluate a range of large language models on the benchmark and find that they fall significantly short of expert-level performance. By providing a standardized tool for measuring judgments of study quality, the benchmark can help guide systems that perform large-scale aggregation of scientific data. The dataset is available at https://github.com/RoBBR-Benchmark/RoBBR.