🤖 AI Summary
To address the challenges of extracting numerical evidence and inferring outcome-specific conclusions in systematic reviews, this paper pioneers modeling clinical conclusion inference as a quantitative reasoning task. We propose a reinforcement learning (RL)-based numerical extraction framework that integrates medical knowledge–driven logical reasoning. The framework jointly performs key numerical extraction (e.g., event counts, standard deviations), effect size estimation, supervised fine-tuning, and RL-based optimization, and introduces an interpretable, domain-aligned reward model grounded in clinical relevance. Evaluated on the CochraneForest benchmark, our method achieves an F1 score of 89.3%, outperforming strong baselines by 21 percentage points and surpassing a 400B-parameter general-purpose large language model by 9 points. This work demonstrates a breakthrough in high-accuracy, highly interpretable clinical reasoning using a compact, domain-specialized model.
📝 Abstract
Systematic reviews in medicine play a critical role in evidence-based decision-making by aggregating findings from multiple studies. A central bottleneck in automating this process is extracting numeric evidence and determining study-level conclusions for specific outcomes and comparisons. Prior work has framed this problem as a textual inference task by retrieving relevant content fragments and inferring conclusions from them. However, such approaches often rely on shallow textual cues and fail to capture the underlying numeric reasoning behind expert assessments. In this work, we conceptualise the problem as one of quantitative reasoning. Rather than inferring conclusions from surface text, we extract structured numerical evidence (e.g., event counts or standard deviations) and apply domain knowledge informed logic to derive outcome-specific conclusions. We develop a numeric reasoning system composed of a numeric data extraction model and an effect estimate component, enabling more accurate and interpretable inference aligned with the domain expert principles. We train the numeric data extraction model using different strategies, including supervised fine-tuning (SFT) and reinforcement learning (RL) with a new value reward model. When evaluated on the CochraneForest benchmark, our best-performing approach -- using RL to train a small-scale number extraction model -- yields up to a 21% absolute improvement in F1 score over retrieval-based systems and outperforms general-purpose LLMs of over 400B parameters by up to 9%. Our results demonstrate the promise of reasoning-driven approaches for automating systematic evidence synthesis.