Stability-Aware Prompt Optimization for Clinical Data Abstraction

📅 2026-01-29
🤖 AI Summary
This work addresses the high sensitivity of clinical large language models (LLMs) to prompt phrasing, a challenge often overlooked by existing approaches that treat prompt accuracy and stability in isolation. The study proposes a novel bi-objective optimization framework that explicitly incorporates prompt stability alongside accuracy as a joint optimization target. By introducing flip rate as a metric for prompt sensitivity and integrating calibration analysis with selective prediction, the authors develop an iterative prompt optimization algorithm. Evaluated on MedAlign applicability assessment and multiple sclerosis subtype extraction tasks across various open- and closed-source LLMs, the method substantially reduces prompt flip rates while maintaining near-optimal accuracy. These results demonstrate that high accuracy does not necessarily imply high stability, highlighting the critical need to jointly optimize both dimensions for robust clinical deployment.
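The paper's exact flip-rate definition is not given here; a minimal sketch of one plausible reading (the fraction of item predictions that change between prompt paraphrases) might look like this. The function name and input format are illustrative assumptions, not the authors' implementation:

```python
from itertools import combinations

def flip_rate(predictions):
    """Estimate prompt sensitivity as the fraction of (prompt-pair, item)
    comparisons where the predicted label disagrees.

    `predictions` maps each prompt paraphrase to a list of per-item labels
    (all lists the same length). Returns a value in [0, 1]; 0 means the
    model's outputs are unchanged by paraphrasing.
    """
    runs = list(predictions.values())
    flips = total = 0
    for a, b in combinations(runs, 2):
        for y_a, y_b in zip(a, b):
            total += 1
            flips += (y_a != y_b)
    return flips / total if total else 0.0

# Two paraphrases that disagree on 1 of 4 items give a flip rate of 0.25
print(flip_rate({"p1": [1, 0, 1, 1], "p2": [1, 0, 0, 1]}))
```

A key property of this metric is that it is computed without gold labels: two prompts can flip each other's outputs even when both achieve similar accuracy, which is exactly the accuracy/stability gap the paper highlights.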

📝 Abstract
Large language models used for clinical abstraction are sensitive to prompt wording, yet most work treats prompts as fixed and studies uncertainty in isolation. We argue these should be treated jointly. Across two clinical tasks (MedAlign applicability/correctness and MS subtype abstraction) and multiple open and proprietary models, we measure prompt sensitivity via flip rates and relate it to calibration and selective prediction. We find that higher accuracy does not guarantee prompt stability, and that models can appear well-calibrated yet remain fragile to paraphrases. We propose a dual-objective prompt optimization loop that jointly targets accuracy and stability, showing that explicitly including a stability term reduces flip rates across tasks and models, sometimes at modest accuracy cost. Our results suggest prompt sensitivity should be an explicit objective when validating clinical LLM systems.
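The abstract does not specify how the dual objective is scalarized; one common and plausible form is a weighted trade-off between accuracy and flip rate, used to rank candidate prompts inside the optimization loop. The weight `lam` and the candidate data below are hypothetical:

```python
def dual_objective(accuracy, flip_rate, lam=0.5):
    """Score a candidate prompt: reward accuracy, penalize instability.

    `lam` controls how much accuracy we are willing to trade for
    stability; larger values favor low-flip-rate prompts.
    """
    return accuracy - lam * flip_rate

# Ranking two hypothetical candidate prompts: B is slightly less
# accurate but far more stable, so it wins under this objective.
candidates = [
    {"prompt": "A", "accuracy": 0.92, "flip_rate": 0.30},  # 0.92 - 0.15 = 0.77
    {"prompt": "B", "accuracy": 0.89, "flip_rate": 0.05},  # 0.89 - 0.025 = 0.865
]
best = max(candidates, key=lambda c: dual_objective(c["accuracy"], c["flip_rate"]))
print(best["prompt"])  # prints "B"
```

This toy example illustrates the abstract's point that stability can be worth "modest accuracy cost": the highest-accuracy prompt is not selected once fragility is penalized.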
Problem

Research questions and friction points this paper is trying to address.

prompt sensitivity
clinical data abstraction
large language models
model stability
calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt stability
clinical LLM
dual-objective optimization
flip rate
selective prediction
Arinbjörn Kolbeinsson
University of Virginia
Machine Learning · Biomedical Data Science · Tensor methods
Daniel Timbie
Century Health
Sajjan Narsinghani
Century Health
Sanjay Hariharan
Century Health