Knowing When to Abstain: Medical LLMs Under Clinical Uncertainty

📅 2026-01-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models (LLMs) in healthcare lack reliable abstention mechanisms under clinical uncertainty, and reliance on accuracy alone is insufficient to ensure safety in high-stakes scenarios. This work proposes MedAbstain—the first unified evaluation framework for abstention in medical multiple-choice question answering (MCQA)—which systematically assesses the abstention capabilities of both open- and closed-source LLMs through explicit abstention options, adversarial question perturbations, and conformal prediction. The study reveals that highly accurate models often fail to abstain appropriately when uncertain; in contrast, explicit abstention mechanisms substantially improve safe abstention behavior, whereas merely scaling model size or employing advanced prompting strategies yields limited gains. These findings underscore MedAbstain’s contribution to moving beyond conventional evaluation paradigms in medical AI.
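To ground the conformal-prediction component, here is a minimal sketch of how a split-conformal threshold can drive abstention in MCQA. It assumes access to the model's per-option probabilities; the nonconformity score (one minus the probability assigned to the correct option) and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration for MCQA.

    cal_probs: (n, k) array of model probabilities over the k options
    cal_labels: (n,) array of correct-option indices
    alpha: target miscoverage rate (0.1 gives ~90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity: 1 - probability assigned to the true option.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def predict_or_abstain(probs, q_hat):
    """Answer with a single option index, or return None to abstain."""
    # Prediction set: every option whose nonconformity clears the threshold.
    prediction_set = np.where(1.0 - probs <= q_hat)[0]
    return int(prediction_set[0]) if len(prediction_set) == 1 else None
```

A prediction set larger than one option means the model cannot commit to a single answer at the calibrated confidence level, which is precisely the situation where abstaining is the safe action.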

📝 Abstract
Current evaluation of large language models (LLMs) overwhelmingly prioritizes accuracy; however, in real-world and safety-critical applications, the ability to abstain when uncertain is equally vital for trustworthy deployment. We introduce MedAbstain, a unified benchmark and evaluation protocol for abstention in medical multiple-choice question answering (MCQA) -- a discrete-choice setting that generalizes to agentic action selection -- integrating conformal prediction, adversarial question perturbations, and explicit abstention options. Our systematic evaluation of both open- and closed-source LLMs reveals that even state-of-the-art, high-accuracy models often fail to abstain when uncertain. Notably, providing explicit abstention options consistently elicits greater model uncertainty and safer abstention, far more than input perturbations, while scaling model size or advanced prompting brings little improvement. These findings highlight the central role of abstention mechanisms for trustworthy LLM deployment and offer practical guidance for improving safety in high-stakes applications.
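The explicit-abstention-option mechanism is also easy to picture. Below is a hedged sketch of one way to append an abstention choice to an MCQA prompt and score its selection as abstention; the option wording, letter labels, and answer format are assumptions for illustration, not the benchmark's actual templates.

```python
def with_abstention_option(question, options, abstain_text="I am not sure; I abstain."):
    """Append an explicit abstention choice as the final lettered option.

    Returns the formatted prompt and the letter that encodes abstention,
    so a reply matching that letter is scored as abstention rather than
    as an incorrect answer.
    """
    letters = [chr(ord("A") + i) for i in range(len(options) + 1)]
    lines = [question]
    lines += [f"{letter}. {text}" for letter, text in zip(letters, options + [abstain_text])]
    lines.append("Answer with a single letter.")
    return "\n".join(lines), letters[-1]

# Hypothetical usage:
prompt, abstain_letter = with_abstention_option(
    "Which drug is first-line for anaphylaxis?",
    ["Epinephrine", "Diphenhydramine", "Hydrocortisone", "Salbutamol"],
)
```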
Problem

Research questions and friction points this paper is trying to address.

abstention
clinical uncertainty
medical LLMs
trustworthy AI
safety-critical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

abstention
conformal prediction
medical LLMs
uncertainty quantification
adversarial perturbation
Sravanthi Machcha
Manning College of Information and Computer Sciences, UMass Amherst, MA, USA
Sushrita Yerra
Manning College of Information and Computer Sciences, UMass Amherst, MA, USA
Sahil Gupta
Manning College of Information and Computer Sciences, UMass Amherst, MA, USA
Aishwarya Sahoo
University of Massachusetts, Amherst
Natural Language Processing · Large Language Models · Reinforcement Learning · AI Safety & Alignment
Sharmin Sultana
Center for Healthcare Organization and Implementation Research, VA Bedford Health Care; Miner School of Computer and Information Sciences, UMass Lowell, MA, USA
Hong Yu
Manning College of Information and Computer Sciences, UMass Amherst, MA, USA; Center for Healthcare Organization and Implementation Research, VA Bedford Health Care; Miner School of Computer and Information Sciences, UMass Lowell, MA, USA
Zonghai Yao
UMass Amherst
Medical-LLM · Multi-agent AI Hospital · Clinical Reasoning · Synthetic Data · Patient Education