🤖 AI Summary
This work identifies a critical limitation of vision-language models (VLMs): severely degraded generalization on rare anatomical variations, stemming from strong prior biases toward "typical" anatomy. To address this, the authors introduce the concept of *natural adversarial anatomy* and construct AdversarialAnatomyBench, the first cross-modal, multi-anatomical-region benchmark of rare anatomical variants. Evaluating 22 state-of-the-art VLMs on fundamental medical perception tasks, they find that average accuracy plummets to 29% on atypical anatomy, down 45 percentage points from 74% on typical anatomy, with even the best-performing models suffering 41-51% relative performance drops. Standard mitigation strategies, including bias-aware prompting and test-time reasoning, yield negligible improvements. This study provides the first quantitative characterization and root-cause attribution of VLMs' clinical robustness deficits, establishing a foundational evaluation paradigm and actionable directions for developing trustworthy medical AI.
📝 Abstract
Vision-language models are increasingly integrated into clinical workflows. However, existing benchmarks primarily assess performance on common anatomical presentations and fail to capture the challenges posed by rare variants. To address this gap, we introduce AdversarialAnatomyBench, the first benchmark comprising naturally occurring rare anatomical variants across diverse imaging modalities and anatomical regions. We term such variants, which violate learned priors about "typical" human anatomy, *natural adversarial anatomy*. Benchmarking 22 state-of-the-art VLMs on AdversarialAnatomyBench yielded three key insights. First, on basic medical perception tasks, mean accuracy dropped from 74% on typical to 29% on atypical anatomy. Even the best-performing models, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick, showed relative performance drops of 41-51%. Second, model errors closely mirrored expected anatomical biases. Third, neither model scaling nor interventions, including bias-aware prompting and test-time reasoning, resolved these issues. These findings highlight a critical and previously unquantified limitation of current VLMs: poor generalization to rare anatomical presentations. AdversarialAnatomyBench provides a foundation for systematically measuring and mitigating anatomical bias in multimodal medical AI systems.