🤖 AI Summary
Current large language models (LLMs) in healthcare lack reliable abstention mechanisms under clinical uncertainty, and accuracy alone is insufficient to ensure safety in high-stakes scenarios. This work proposes MedAbstain, the first unified evaluation framework for abstention in medical multiple-choice question answering (MCQA), which systematically assesses the abstention capabilities of both open- and closed-source LLMs through explicit abstention options, adversarial question perturbations, and conformal prediction. The study reveals that highly accurate models often fail to abstain when uncertain; in contrast, explicit abstention mechanisms substantially improve safe abstention behavior, whereas merely scaling model size or employing advanced prompting strategies yields limited gains. These findings position MedAbstain as a step beyond accuracy-only evaluation paradigms in medical AI.
📝 Abstract
Current evaluation of large language models (LLMs) overwhelmingly prioritizes accuracy; however, in real-world and safety-critical applications, the ability to abstain when uncertain is equally vital for trustworthy deployment. We introduce MedAbstain, a unified benchmark and evaluation protocol for abstention in medical multiple-choice question answering (MCQA) -- a discrete-choice setting that generalizes to agentic action selection -- integrating conformal prediction, adversarial question perturbations, and explicit abstention options. Our systematic evaluation of both open- and closed-source LLMs reveals that even state-of-the-art, high-accuracy models often fail to abstain when uncertain. Notably, providing explicit abstention options consistently increases expressed model uncertainty and promotes safer abstention, far more than input perturbations do, while scaling model size or advanced prompting brings little improvement. These findings highlight the central role of abstention mechanisms for trustworthy LLM deployment and offer practical guidance for improving safety in high-stakes applications.
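
For concreteness, below is a minimal sketch of how conformal prediction can yield an abstention rule in MCQA. It assumes access to per-option softmax scores from the model; the nonconformity score (one minus the true option's score), the split-conformal threshold, and the abstain-unless-singleton policy are illustrative assumptions, not necessarily MedAbstain's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: per-question softmax scores over 4 MCQA
# options plus the index of the correct option (stand-ins for real model
# outputs on a held-out calibration split).
n_cal, n_options = 500, 4
cal_scores = rng.dirichlet(np.ones(n_options), size=n_cal)
cal_labels = rng.integers(0, n_options, size=n_cal)

def conformal_threshold(scores, labels, alpha=0.1):
    """Split-conformal calibration: nonconformity = 1 - score of the true option."""
    n = len(labels)
    nonconformity = 1.0 - scores[np.arange(n), labels]
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(nonconformity, q_level, method="higher")

def answer_or_abstain(option_scores, qhat):
    """Commit only if the conformal prediction set is a singleton; else abstain."""
    pred_set = [i for i, s in enumerate(option_scores) if 1.0 - s <= qhat]
    return pred_set[0] if len(pred_set) == 1 else None  # None = abstain

qhat = conformal_threshold(cal_scores, cal_labels, alpha=0.1)
test_scores = rng.dirichlet(np.ones(n_options))  # one new question's scores
decision = answer_or_abstain(test_scores, qhat)
print("abstain" if decision is None else f"answer option {decision}")
```

Under exchangeability of calibration and test questions, the conformal set covers the true option at roughly the 1 - alpha rate, so a non-singleton set is a principled signal that the model is too uncertain to commit, making set size a natural trigger for abstention.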