Dissecting Clinical Reasoning in Language Models: A Comparative Study of Prompts and Model Adaptation Strategies

📅 2025-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical natural language inference (NLI) demands multi-step reasoning, yet small language models (SLMs) suffer from limited reasoning depth and unreliable outputs. Method: The authors systematically design four abstraction-level prompt structures to scaffold clinical NLI reasoning and distil inference knowledge from state-of-the-art models; these prompts guide LoRA fine-tuning of a 4B-parameter SLM, evaluated on benchmarks including NLI4CT. Contribution/Results: Prompt structure dominates performance, accounting for 44% of the variance in macro-F1, while LoRA fine-tuning yields gains of +8 to +12 F1. The optimized model achieves >97% output consistency and reaches 92.9% of GPT-4o-mini's performance. Notably, 75% of model variants generalize across clinical NLI datasets. This work is the first to empirically validate that "high-quality prompting + lightweight fine-tuning" enables compact models to approach large-model efficacy in clinical reasoning. It further proposes an evaluation paradigm stratified by reasoning type (e.g., deductive, abductive, or comparative) to better assess clinical NLI capability.

📝 Abstract
Recent works on large language models (LLMs) have demonstrated the impact of prompting strategies and fine-tuning techniques on their reasoning capabilities. Yet, their effectiveness on clinical natural language inference (NLI) remains underexplored. This study presents the first controlled evaluation of how prompt structure and efficient fine-tuning jointly shape model performance in clinical NLI. We inspect four classes of prompting strategies to elicit reasoning in LLMs at different levels of abstraction, and evaluate their impact on a range of clinically motivated reasoning types. For each prompting strategy, we construct high-quality demonstrations using a frontier model to distil multi-step reasoning capabilities into smaller models (4B parameters) via Low-Rank Adaptation (LoRA). Across different language models fine-tuned on the NLI4CT benchmark, we found that prompt type alone accounts for up to 44% of the variance in macro-F1. Moreover, LoRA fine-tuning yields consistent gains of +8 to 12 F1, raises output alignment above 97%, and narrows the performance gap to GPT-4o-mini to within 7.1%. Additional experiments on reasoning generalisation reveal that LoRA improves performance in 75% of the models on MedNLI and TREC Clinical Trials Track. Overall, these findings demonstrate that (i) prompt structure is a primary driver of clinical reasoning performance, (ii) compact models equipped with strong prompts and LoRA can rival frontier-scale systems, and (iii) reasoning-type-aware evaluation is essential to uncover prompt-induced trade-offs. Our results highlight the promise of combining prompt design and lightweight adaptation for more efficient and trustworthy clinical NLP systems, providing insights on the strengths and limitations of widely adopted prompting and parameter-efficient techniques in highly specialised domains.
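The Low-Rank Adaptation (LoRA) technique used in the abstract can be illustrated with a minimal sketch. This is not the paper's code; it is a toy, dependency-free illustration of the standard LoRA formulation, where a frozen pretrained weight matrix W receives a trainable low-rank delta, W' = W + (alpha / r) * B A. All matrix shapes and values below are hypothetical.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply, for illustration only."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_adapted_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the effective LoRA-adapted weight.

    W is frozen (d_out x d_in); only A (r x d_in) and B (d_out x r) are
    trained, so trainable parameters drop from d_out*d_in to r*(d_out + d_in).
    """
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example with d_out = d_in = 2 and rank r = 1:
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight
A = [[0.5, 0.5]]               # (r x d_in), trainable
B = [[2.0], [0.0]]             # (d_out x r), trainable
W_adapted = lora_adapted_weight(W, A, B, alpha=1.0, r=1)
# B @ A = [[1.0, 1.0], [0.0, 0.0]], so W_adapted = [[2.0, 1.0], [0.0, 1.0]]
```

In practice this is applied per attention/projection matrix via a library such as Hugging Face PEFT rather than written by hand; the rank r and scaling alpha are the key hyperparameters controlling adapter capacity.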
Problem

Research questions and friction points this paper is trying to address.

Evaluates prompt impact on clinical reasoning in LLMs
Assesses LoRA fine-tuning for clinical NLI performance
Compares reasoning types across specialized medical NLP tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates prompt structure impact on clinical NLI
Uses LoRA for efficient fine-tuning of models
Combines strong prompts and LoRA for performance
Maël Jullien
The University of Manchester
NLP, NLI
Marco Valentino
University of Sheffield
Natural Language Processing, Neurosymbolic AI, Explanation
Leonardo Ranaldi
University of Edinburgh
Natural Language Processing, Machine Learning, Artificial Intelligence
André Freitas
Department of Computer Science, University of Manchester, UK; National Biomarker Centre, CRUK-MI, University of Manchester, UK; Idiap Research Institute, Switzerland