🤖 AI Summary
Large language models (LLMs) exhibit critical clinical unreliability when responding to real-world cancer patient queries containing erroneous presuppositions, a gap left unaddressed by existing medical evaluation benchmarks.
Method: To address the lack of adversarial medical contexts in current assessments, we introduce Cancer-Myth, the first expert-validated adversarial dataset (585 items) for evaluating presupposition identification and correction, and use it to benchmark state-of-the-art models including GPT-4o, Gemini-1.5-Pro, and Claude-3.5-Sonnet.
Contribution/Results: All models corrected erroneous presuppositions in fewer than 30% of cases; notably, GPT-4-Turbo achieved high overall response quality on real patient questions (4.13/5) yet failed almost entirely to recognize false presuppositions. Multi-model comparison, validated via oncology expert review and medical-agent analysis, reveals systematic neglect or reinforcement of false medical premises. This work quantifies, for the first time, the "presupposition blindness" flaw in clinical LLM dialogues, establishing a novel benchmark and methodology for safety evaluation and trustworthiness enhancement of AI-powered healthcare conversational systems.
📝 Abstract
Cancer patients are increasingly turning to large language models (LLMs) as a new form of internet search for medical information, making it critical to assess how well these models handle complex, personalized questions. However, current medical benchmarks focus on medical exams or consumer-searched questions and do not evaluate LLMs on real patient questions with detailed clinical contexts. In this paper, we first evaluate LLMs on cancer-related questions drawn from real patients, reviewed by three hematology-oncology physicians. While responses are generally accurate, with GPT-4-Turbo scoring 4.13 out of 5, the models frequently fail to recognize or address false presuppositions in the questions, posing risks to safe medical decision-making. To study this limitation systematically, we introduce Cancer-Myth, an expert-verified adversarial dataset of 585 cancer-related questions with false presuppositions. On this benchmark, no frontier LLM (including GPT-4o, Gemini-1.5-Pro, and Claude-3.5-Sonnet) corrects these false presuppositions more than 30% of the time. Even advanced medical agentic methods do not prevent LLMs from ignoring false presuppositions. These findings expose a critical gap in the clinical reliability of LLMs and underscore the need for more robust safeguards in medical AI systems.
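To make the headline metric concrete, here is a minimal sketch of how a Cancer-Myth-style correction rate (the "no more than 30%" figure above) might be computed. This is not the paper's released tooling: `MythItem`, `query_model`, and `judge_correction` are hypothetical placeholders for a benchmark item, an LLM call, and an expert or LLM-based judge, respectively.

```python
# Minimal sketch (hypothetical, not the paper's code): computing the fraction
# of benchmark items where a model's answer corrects the false presupposition.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MythItem:
    question: str        # patient question containing a false presupposition
    presupposition: str  # the erroneous premise embedded in the question

def correction_rate(
    items: list[MythItem],
    query_model: Callable[[str], str],       # placeholder: question -> model answer
    judge_correction: Callable[[str, str], bool],  # placeholder: (premise, answer) -> corrected?
) -> float:
    """Fraction of items where the answer explicitly corrects the false premise."""
    if not items:
        return 0.0
    corrected = sum(
        judge_correction(item.presupposition, query_model(item.question))
        for item in items
    )
    return corrected / len(items)

# Illustrative usage with trivial stubs and an invented example item:
items = [MythItem(
    question="Since stage II colon cancer always requires chemotherapy, which regimen is best for me?",
    presupposition="Stage II colon cancer always requires chemotherapy.",
)]
rate = correction_rate(
    items,
    query_model=lambda q: "Chemotherapy is not always required for stage II colon cancer; ...",
    judge_correction=lambda premise, answer: "not always required" in answer,
)
print(f"correction rate: {rate:.0%}")
```

In the paper's setup the judging step is handled by expert validation rather than a string match; the stub judge here only stands in for that step so the loop is runnable end to end.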