🤖 AI Summary
This work addresses the high sensitivity of clinical large language models (LLMs) to prompt phrasing, a challenge that existing approaches often overlook by treating prompt accuracy and stability in isolation. The study proposes a dual-objective optimization framework that explicitly incorporates prompt stability alongside accuracy as a joint optimization target. By introducing flip rate as a metric for prompt sensitivity and integrating calibration analysis with selective prediction, the authors develop an iterative prompt optimization algorithm. Evaluated on MedAlign applicability assessment and multiple sclerosis subtype extraction tasks across several open- and closed-source LLMs, the method substantially reduces prompt flip rates while maintaining near-optimal accuracy. These results demonstrate that high accuracy does not necessarily imply high stability, highlighting the need to jointly optimize both dimensions for robust clinical deployment.
📝 Abstract
Large language models used for clinical abstraction are sensitive to prompt wording, yet most work treats prompts as fixed and studies uncertainty in isolation. We argue that prompt design and uncertainty quantification should be treated jointly. Across two clinical tasks (MedAlign applicability/correctness and MS subtype abstraction) and multiple open and proprietary models, we measure prompt sensitivity via flip rates and relate it to calibration and selective prediction. We find that higher accuracy does not guarantee prompt stability, and that models can appear well-calibrated yet remain fragile to paraphrases. We propose a dual-objective prompt optimization loop that jointly targets accuracy and stability, showing that explicitly including a stability term reduces flip rates across tasks and models, sometimes at a modest accuracy cost. Our results suggest prompt sensitivity should be an explicit objective when validating clinical LLM systems.
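The flip-rate metric and the joint accuracy-plus-stability objective described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function names, the convention of scoring accuracy on a base prompt variant, and the `stability_weight` trade-off parameter are all assumptions.

```python
def flip_rate(predictions_by_prompt):
    """Fraction of items whose predicted label changes under at least one
    prompt paraphrase. `predictions_by_prompt` is a list of prediction
    lists, one per prompt variant, all aligned to the same items."""
    n_items = len(predictions_by_prompt[0])
    flips = 0
    for i in range(n_items):
        labels = {preds[i] for preds in predictions_by_prompt}
        if len(labels) > 1:  # any disagreement across paraphrases is a flip
            flips += 1
    return flips / n_items

def accuracy(predictions, gold):
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def dual_objective(predictions_by_prompt, gold, stability_weight=0.5):
    """Score a prompt family: accuracy of its base variant minus a
    penalty proportional to the flip rate across its paraphrases.
    An optimization loop would keep the candidate prompt that
    maximizes this score."""
    acc = accuracy(predictions_by_prompt[0], gold)
    return acc - stability_weight * flip_rate(predictions_by_prompt)

# Toy example: three paraphrases of one prompt over four MS-subtype items.
preds = [
    ["RRMS", "SPMS", "PPMS", "RRMS"],  # base prompt
    ["RRMS", "SPMS", "RRMS", "RRMS"],  # paraphrase 1 flips item 3
    ["RRMS", "SPMS", "PPMS", "RRMS"],  # paraphrase 2 agrees with base
]
gold = ["RRMS", "SPMS", "PPMS", "RRMS"]
print(flip_rate(preds))            # 0.25 (one of four items flips)
print(dual_objective(preds, gold)) # 1.0 - 0.5 * 0.25 = 0.875
```

The toy example makes the paper's central point concrete: the base prompt is perfectly accurate (1.0), yet the flip-rate penalty still lowers its joint score, so an optimizer using only accuracy would never see the fragility.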