Evaluating the Ability of Large Language Models to Identify Adherence to CONSORT Reporting Guidelines in Randomized Controlled Trials: A Methodological Evaluation Study

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack systematic evaluation for detecting CONSORT 2010 guideline adherence in published randomized controlled trials (RCTs), particularly in zero-shot settings. Method: We constructed a gold-standard dataset of 150 RCTs annotated across three classes—compliant, non-compliant, and not applicable—and evaluated six LLMs using macro-F1 and Cohen’s kappa, supplemented by item-level analysis and qualitative error tracing. Contribution/Results: Gemini-2.5-Flash and DeepSeek-R1 achieved the highest macro-F1 (0.634), yet exhibited strong bias toward “compliant” classification (F1 > 0.850), with markedly lower performance on “non-compliant” and “not applicable” categories (F1 < 0.400); GPT-4o scored only 0.521. Findings indicate that state-of-the-art LLMs cannot reliably identify reporting omissions or methodological flaws. While potentially useful as preliminary screening aids, they remain inadequate substitutes for human expert review—establishing an empirical benchmark and critical insights for refining LLM applications in clinical trial reporting assessment.

📝 Abstract
The Consolidated Standards of Reporting Trials (CONSORT) statement is the global benchmark for transparent, high-quality reporting of randomized controlled trials. Manual verification of CONSORT adherence is a laborious, time-intensive process that constitutes a significant bottleneck in peer review and evidence synthesis. This study aimed to systematically evaluate the accuracy and reliability of contemporary LLMs in identifying the adherence of published RCTs to the CONSORT 2010 statement under a zero-shot setting. We constructed a gold-standard dataset of 150 published RCTs spanning diverse medical specialties. The primary outcome was the macro-averaged F1-score for the three-class classification task, supplemented by item-wise performance metrics and qualitative error analysis. Overall model performance was modest. The top-performing models, Gemini-2.5-Flash and DeepSeek-R1, achieved nearly identical macro-F1 scores of 0.634 and Cohen's kappa coefficients of 0.280 and 0.282, respectively, indicating only fair agreement with expert consensus. A striking performance disparity was observed across classes: while most models could identify compliant items with high accuracy (F1-score > 0.850), they struggled profoundly to identify non-compliant and not-applicable items, where F1-scores rarely exceeded 0.400. Notably, some high-profile models such as GPT-4o underperformed, achieving a macro-F1 score of only 0.521. LLMs show potential as preliminary screening assistants for CONSORT checks, capably identifying well-reported items. However, their current inability to reliably detect reporting omissions or methodological flaws makes them unsuitable replacements for human expertise in the critical appraisal of trial quality.
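The two headline metrics, macro-averaged F1 and Cohen's kappa, behave very differently under the class imbalance the abstract describes: a model that labels almost everything "compliant" can score a high compliant-class F1 while its macro-F1 and kappa collapse. A minimal sketch of both metrics (plain Python, not the authors' evaluation pipeline; the label names follow the paper's three classes, the example predictions are hypothetical):

```python
from collections import Counter

LABELS = ["compliant", "non-compliant", "not applicable"]

def per_class_f1(y_true, y_pred, label):
    """F1 for one class, treating that class as the positive label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 — each class counts equally."""
    return sum(per_class_f1(y_true, y_pred, l) for l in LABELS) / len(LABELS)

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    ct, cp = Counter(y_true), Counter(y_pred)
    p_e = sum(ct[l] * cp[l] for l in LABELS) / (n * n)     # chance agreement
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

# Hypothetical illustration of the "compliant" bias the paper reports:
# 10 items, 6 truly compliant, but the model predicts "compliant" for all.
y_true = ["compliant"] * 6 + ["non-compliant"] * 2 + ["not applicable"] * 2
y_pred = ["compliant"] * 10
print(per_class_f1(y_true, y_pred, "compliant"))  # 0.75 — looks decent
print(macro_f1(y_true, y_pred))                   # 0.25 — imbalance exposed
print(cohens_kappa(y_true, y_pred))               # 0.0  — no better than chance
```

This is why the abstract pairs macro-F1 with kappa: the unweighted macro average penalizes the weak minority classes, and kappa discounts agreement a constant "compliant" predictor would get for free.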
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to identify CONSORT guideline adherence in trials
Assessing automated detection of reporting omissions in randomized controlled trials
Testing LLM reliability for methodological quality appraisal without human expertise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs for automated CONSORT adherence screening
Evaluating models under zero-shot classification setting
Benchmarking performance against expert-curated gold standard
Authors
Zhichao He, Sun Yat-sen Memorial Hospital, Guangdong, China
Mouxiao Bian, Shanghai Artificial Intelligence Laboratory, Shanghai, China
Jianhong Zhu, Sun Yat-sen Memorial Hospital, Guangdong, China
Jiayuan Chen, Shanghai Artificial Intelligence Laboratory, Shanghai, China
Yunqiu Wang, Sun Yat-sen Memorial Hospital, Guangdong, China
Wenxia Zhao, Sun Yat-sen Memorial Hospital, Guangdong, China
Tianbin Li, Shanghai Artificial Intelligence Laboratory (Machine Learning, Computer Vision, General Intelligence)
Bing Han, Shanghai Artificial Intelligence Laboratory, Shanghai, China
Jie Xu, Shanghai Artificial Intelligence Laboratory, Shanghai, China
Junyan Wu, Ph.D. student, School of Computer Science and Engineering, Sun Yat-sen University (multimedia forensics and security)