Enhancing Study-Level Inference from Clinical Trial Papers via RL-based Numeric Reasoning

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of extracting numerical evidence and inferring outcome-specific conclusions in systematic reviews, this paper models clinical conclusion inference as a quantitative reasoning task. We propose a numeric reasoning framework that integrates medical knowledge–driven logic: an extraction model identifies key numerical values (e.g., event counts, standard deviations) and an effect-estimate component derives outcome-specific conclusions from them. The extraction model is trained with supervised fine-tuning and reinforcement learning (RL), using an interpretable, domain-aligned reward model grounded in clinical relevance. Evaluated on the CochraneForest benchmark, our method achieves an F1 score of 89.3%, improving over retrieval-based baselines by up to 21 percentage points and surpassing general-purpose large language models of over 400B parameters by up to 9 points. This work demonstrates that a compact, domain-specialized model can deliver accurate, interpretable clinical reasoning.

📝 Abstract
Systematic reviews in medicine play a critical role in evidence-based decision-making by aggregating findings from multiple studies. A central bottleneck in automating this process is extracting numeric evidence and determining study-level conclusions for specific outcomes and comparisons. Prior work has framed this problem as a textual inference task by retrieving relevant content fragments and inferring conclusions from them. However, such approaches often rely on shallow textual cues and fail to capture the underlying numeric reasoning behind expert assessments. In this work, we conceptualise the problem as one of quantitative reasoning. Rather than inferring conclusions from surface text, we extract structured numerical evidence (e.g., event counts or standard deviations) and apply domain-knowledge-informed logic to derive outcome-specific conclusions. We develop a numeric reasoning system composed of a numeric data extraction model and an effect estimate component, enabling more accurate and interpretable inference aligned with domain-expert principles. We train the numeric data extraction model using different strategies, including supervised fine-tuning (SFT) and reinforcement learning (RL) with a new value reward model. When evaluated on the CochraneForest benchmark, our best-performing approach -- using RL to train a small-scale number extraction model -- yields up to a 21% absolute improvement in F1 score over retrieval-based systems and outperforms general-purpose LLMs of over 400B parameters by up to 9%. Our results demonstrate the promise of reasoning-driven approaches for automating systematic evidence synthesis.
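To make the effect-estimate step concrete: once event counts are extracted, a standard meta-analytic effect measure such as the risk ratio with a 95% confidence interval can be computed from them. The sketch below uses the textbook log-risk-ratio method; it is a minimal illustration of one common effect measure, not the paper's actual effect estimate component, and the function name and signature are assumptions.

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio and 95% CI from raw event counts (log-RR method).

    events_t / n_t: events and sample size in the treatment arm;
    events_c / n_c: the same for the comparator arm.
    """
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) for a 2x2 table of event counts
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

For example, 10 events out of 100 in the treatment arm versus 20 out of 100 in the comparator arm gives a risk ratio of 0.5 with a confidence interval that crosses 1, so on its own this study would not support a firm conclusion.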
Problem

Research questions and friction points this paper is trying to address.

Extracting numeric evidence from clinical trial papers
Determining study-level conclusions for outcomes and comparisons
Improving accuracy in automated systematic evidence synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-based numeric reasoning for clinical trials
Extracting structured numerical evidence for conclusions
Domain knowledge logic for outcome-specific inference
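The "domain knowledge logic" above could, in its simplest form, be a deterministic rule mapping an effect estimate's confidence interval to a study-level verdict. The rule below is a hypothetical illustration only: the direction of benefit depends on the outcome (a risk ratio below 1 favours the intervention only for harmful outcomes such as mortality), and the paper's actual logic may be richer.

```python
def conclude(ci_low: float, ci_high: float) -> str:
    """Map a risk-ratio confidence interval to a study-level conclusion.

    Hypothetical rule for a harmful outcome: a CI entirely below 1
    favours the intervention, entirely above 1 favours the comparator,
    and a CI crossing 1 is inconclusive.
    """
    if ci_high < 1.0:
        return "favours intervention"
    if ci_low > 1.0:
        return "favours comparator"
    return "no significant difference"
```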
Authors
Massimiliano Pronesti (IBM Research Europe, Ireland)
Michela Lorandi (Dublin City University, ADAPT Centre; NLP, Dialogue Systems, Deep Learning)
Paul Flanagan (Dublin City University)
Oisin Redmon (Dublin City University)
Anya Belz (Professor of Computer Science, ADAPT Research Centre, Dublin City University, Ireland; Natural Language Generation, AI, Natural Language Processing, Evaluation, Reproducibility)
Yufang Hou (IT:U Interdisciplinary Transformation University Austria)