🤖 AI Summary
Medical chatbots must maintain recommendation consistency even when queries include non-clinical factors such as demographic attributes; however, current large language models (LLMs) frequently exhibit hallucination, omission, and bias in such contexts.
Method: We systematically investigate their failure boundaries by (1) developing a multidimensional clinical case generation pipeline that integrates patient demographics, medical history, disease type, and linguistic style; and (2) proposing a multi-LLM-as-a-judge evaluation framework that employs autonomous agent workflows to assess consistency, bias, and factual errors.
Contribution/Results: We find extremely low inter-annotator agreement among LLM evaluators (mean Cohen’s κ = 0.118); only specific LLM combinations detect statistically significant differences—highlighting the risk of misleading conclusions from single-evaluator assessments. This work is the first to demonstrate severe generalizability limitations in current medical AI evaluation methodologies and underscores the necessity of reporting cross-LLM consistency metrics for trustworthy medical AI assessment.
📝 Abstract
Recent research has shown that hallucinations, omissions, and biases are prevalent in everyday use cases of LLMs. However, chatbots used in medical contexts must provide consistent advice in situations where non-medical factors are involved, such as when demographic information is present. In order to understand the conditions under which medical chatbots fail to perform as expected, we develop an infrastructure that 1) automatically generates queries to probe LLMs and 2) evaluates answers to these queries using multiple LLM-as-a-judge setups and prompts. For 1), our prompt creation pipeline samples the space of patient demographics, histories, disorders, and writing styles to create realistic questions that we subsequently use to prompt LLMs. For 2), our evaluation pipeline provides hallucination and omission detection using LLM-as-a-judge as well as agentic workflows, in addition to LLM-as-a-judge treatment category detectors. As a baseline study, we perform two case studies on inter-LLM agreement and on the impact of varying the answering and evaluation LLMs. We find that LLM annotators exhibit low agreement scores (average Cohen's kappa $\kappa = 0.118$), and that only specific (answering, evaluation) LLM pairs yield statistically significant differences across writing styles, genders, and races. We recommend that studies using LLM evaluation employ multiple LLMs as evaluators in order to avoid arriving at statistically significant but non-generalizable results, particularly in the absence of ground-truth data. We also suggest publishing inter-LLM agreement metrics for transparency. Our code and dataset are available here: https://github.com/BBN-E/medic-neurips-2025-demo.
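The inter-annotator agreement metric underlying the headline result can be illustrated with a short sketch: pairwise Cohen's kappa computed over labels that several LLM judges assign to the same set of responses, then averaged. The judge names and labels below are hypothetical placeholders, not the paper's actual data or code.

```python
from itertools import combinations
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each annotator's label marginals.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    if p_e == 1.0:  # degenerate case: both annotators use one label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts from three LLM judges on five chatbot answers.
judges = {
    "judge_a": ["bias", "ok", "ok", "omission", "ok"],
    "judge_b": ["ok", "ok", "bias", "omission", "ok"],
    "judge_c": ["bias", "omission", "ok", "ok", "ok"],
}

# Average kappa over all judge pairs, analogous to the reported mean.
kappas = [cohens_kappa(judges[x], judges[y])
          for x, y in combinations(judges, 2)]
mean_kappa = sum(kappas) / len(kappas)
```

Reporting the full list of pairwise kappas alongside the mean, as the paper recommends, makes it visible when agreement is driven by one unusually aligned judge pair.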