Beyond Idealized Patients: Evaluating LLMs under Challenging Patient Behaviors in Medical Consultations

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluations of medical large language models rely predominantly on idealized patient queries, failing to capture how real-world complexities such as ambiguous, contradictory, or misleading patient statements affect model safety. This work defines and annotates four clinically grounded challenging patient behaviors: information contradiction, factual inaccuracy, self-diagnosis, and care resistance. Drawing on authentic medical dialogues, the authors reconstruct multi-turn interactions to build CPB-Bench, the first bilingual (Chinese–English) evaluation benchmark targeting these scenarios. Systematic assessment of leading open- and closed-source models reveals consistent safety failures when handling irrational or conflicting inputs, and shows that existing mitigation strategies yield inconsistent improvements and can introduce unnecessary corrections.
📝 Abstract
Large language models (LLMs) are increasingly used for medical consultation and health information support. In this high-stakes setting, safety depends not only on medical knowledge, but also on how models respond when patient inputs are unclear, inconsistent, or misleading. However, most existing medical LLM evaluations assume idealized and well-posed patient questions, which limits their realism. In this paper, we study challenging patient behaviors that commonly arise in real medical consultations and complicate safe clinical reasoning. We define four clinically grounded categories of such behaviors: information contradiction, factual inaccuracy, self-diagnosis, and care resistance. For each behavior, we specify concrete failure criteria that capture unsafe responses. Building on four existing medical dialogue datasets, we introduce CPB-Bench (Challenging Patient Behaviors Benchmark), a bilingual (English and Chinese) benchmark of 692 multi-turn dialogues annotated with these behaviors. We evaluate a range of open- and closed-source LLMs on their responses to challenging patient utterances. While models perform well overall, we identify consistent, behavior-specific failure patterns, with particular difficulty in handling contradictory or medically implausible patient information. We also study four intervention strategies and find that they yield inconsistent improvements and can introduce unnecessary corrections. We release the dataset and code.
Problem

Research questions and friction points this paper is trying to address.

challenging patient behaviors
medical consultation
LLM safety
realistic evaluation
clinical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

challenging patient behaviors
medical LLM evaluation
CPB-Bench
safety in clinical reasoning
multi-turn dialogue benchmark
Yahan Li
University of Southern California
Xinyi Jie
University of Southern California
Wanjia Ruan
University of Southern California
Xubei Zhang
University of Southern California
Huaijie Zhu
University of Southern California
Yicheng Gao
University of Southern California
Chaohao Du
University of Southern California
Ruishan Liu
University of Southern California
machine learning
computational health
computational biology