🤖 AI Summary
This study addresses hallucinations in medical large language models (LLMs) under insufficient or conflicting evidence, a safety-critical issue in clinical settings. The authors propose ECRT, a two-stage white-box detection framework, and introduce RETINA-SAFE, a benchmark dataset that, for the first time, organizes evidence relationships into three fine-grained tasks: evidence-consistent, evidence-conflicting, and evidence-missing. By analyzing internal representations under contextual (CTX) and non-contextual (NOCTX) conditions, detecting logit shifts, and employing class-balanced training, the framework enables interpretable risk triage and attribution of hallucination subtypes. Experimentally, Stage 1 of ECRT improves balanced accuracy by 0.15–0.19 over external uncertainty and self-consistency baselines, and by 0.02–0.07 over the strongest supervised baseline.
📝 Abstract
Hallucinations in medical large language models (LLMs) remain a safety-critical issue, particularly when available evidence is insufficient or conflicting. We study this problem in diabetic retinopathy (DR) decision settings and introduce RETINA-SAFE, an evidence-grounded benchmark aligned with retinal grading records, comprising 12,522 samples. RETINA-SAFE is organized into three evidence-relation tasks: E-Align (evidence-consistent), E-Conflict (evidence-conflicting), and E-Gap (evidence-insufficient). We further propose ECRT (Evidence-Conditioned Risk Triage), a two-stage white-box detection framework: Stage 1 performs Safe/Unsafe risk triage, and Stage 2 refines unsafe cases into contradiction-driven versus evidence-gap risks. ECRT leverages internal representations and logit shifts under CTX/NOCTX conditions, with class-balanced training for robust learning. Under evidence-grouped (not patient-disjoint) splits across multiple backbones, ECRT provides strong Stage-1 risk triage with explicit subtype attribution: it improves Stage-1 balanced accuracy by +0.15 to +0.19 over external uncertainty and self-consistency baselines, by +0.02 to +0.07 over the strongest adapted supervised baseline, and consistently exceeds a single-stage white-box ablation on Stage-1 balanced accuracy. These findings support white-box internal signals grounded in retinal evidence as a practical route to interpretable medical LLM risk triage.
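The abstract's core mechanism — comparing model logits with and without retrieved evidence, then training a class-balanced Safe/Unsafe classifier on the shift — can be illustrated with a minimal sketch. Note this is an assumption-based illustration, not the paper's implementation: the feature set (`logit_shift_features`), the synthetic data, and the plain logistic-regression triage head are all hypothetical stand-ins for ECRT's white-box Stage-1 detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def logit_shift_features(logits_ctx, logits_noctx):
    """Per-sample features from the CTX vs. NOCTX logit shift:
    mean shift, max absolute shift, and KL divergence between the
    two softmax distributions. (Illustrative, not the paper's exact set.)"""
    delta = logits_ctx - logits_noctx
    p, q = softmax(logits_ctx), softmax(logits_noctx)
    kl = np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=1)
    return np.stack([delta.mean(axis=1), np.abs(delta).max(axis=1), kl], axis=1)

def train_balanced_logreg(X, y, lr=0.1, steps=500):
    """Logistic regression with class-balanced sample weights
    (each class weighted inversely to its frequency)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    counts = np.bincount(y, minlength=2)
    sample_w = (n / (2.0 * counts))[y]  # class-balanced weights
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = sample_w * (p - y)          # weighted logistic gradient
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

# Synthetic demo: "unsafe" samples exhibit larger CTX/NOCTX shifts.
n, k = 400, 5
y = (rng.random(n) < 0.3).astype(int)   # ~30% unsafe (imbalanced)
logits_noctx = rng.normal(size=(n, k))
shift = rng.normal(scale=0.1, size=(n, k)) \
        + y[:, None] * rng.normal(loc=2.0, scale=0.3, size=(n, 1))
logits_ctx = logits_noctx + shift

X = logit_shift_features(logits_ctx, logits_noctx)
w, b = train_balanced_logreg(X, y)
pred = ((X @ w + b) > 0).astype(int)
bal_acc = 0.5 * (pred[y == 1].mean() + (1 - pred[y == 0]).mean())
```

On this deliberately separable toy data the balanced accuracy is near 1.0; the point is only the pipeline shape — shift features in, class-balanced triage out — with Stage-2 subtype refinement (contradiction vs. evidence gap) layered on top in the actual framework.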