🤖 AI Summary
Current vision-language models (VLMs) show strong performance on chest X-ray classification, but the flat evaluation metrics they are typically judged by fail to distinguish between errors with vastly different clinical consequences, masking severe misalignment with clinical taxonomies. To address this, the work introduces the concept of “catastrophic abstraction errors” (cross-branch mistakes) and proposes a hierarchical evaluation framework grounded in medical ontologies. It further proposes risk-constrained thresholding and a taxonomy-aware fine-tuning strategy with radial embeddings, aligning model representations with the hierarchical structure of medical knowledge. Experiments show that the proposed approach reduces the rate of catastrophic abstraction errors to below 2% while maintaining competitive overall performance, improving the clinical safety and deployment reliability of these models.
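To make “cross-branch” concrete, here is a minimal sketch of how such errors can be flagged against a taxonomy. The tree, label names, and helper functions are illustrative assumptions, not the paper’s ontology or code: the idea is simply that a prediction landing in a different top-level branch than the ground truth (say, a cardiac finding predicted for a pulmonary one) counts as catastrophic, while a within-branch confusion does not.

```python
# Toy taxonomy as a child -> parent map; "root" closes the tree.
# All labels and structure here are hypothetical.
TAXONOMY = {
    "cardiomegaly": "cardiac",
    "pericardial_effusion": "cardiac",
    "pneumonia": "pulmonary_infection",
    "tuberculosis": "pulmonary_infection",
    "pulmonary_infection": "pulmonary",
    "atelectasis": "pulmonary",
    "cardiac": "root",
    "pulmonary": "root",
}

def path_to_root(label: str) -> list[str]:
    """Return [label, parent, ..., root]."""
    path = [label]
    while path[-1] != "root":
        path.append(TAXONOMY[path[-1]])
    return path

def top_branch(label: str) -> str:
    """The top-level branch directly under the root."""
    return path_to_root(label)[-2]

def is_catastrophic(pred: str, true: str) -> bool:
    """Cross-branch mistake: predicted and true labels live under
    different top-level branches of the taxonomy."""
    return pred != true and top_branch(pred) != top_branch(true)

def catastrophic_rate(preds: list[str], trues: list[str]) -> float:
    """Fraction of predictions that are cross-branch errors."""
    flags = [is_catastrophic(p, t) for p, t in zip(preds, trues)]
    return sum(flags) / len(flags)

# Within-branch confusion (minor) vs cross-branch error (catastrophic):
assert not is_catastrophic("tuberculosis", "pneumonia")  # both pulmonary
assert is_catastrophic("cardiomegaly", "pneumonia")      # cardiac vs pulmonary
```

A flat accuracy metric scores both mistakes above identically; a hierarchical metric built on this kind of check does not.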
📝 Abstract
Vision-Language Models show strong zero-shot performance for chest X-ray classification, but standard flat metrics fail to distinguish between clinically minor and severe errors. This work investigates how to quantify and mitigate abstraction errors by leveraging medical taxonomies. We benchmark several state-of-the-art VLMs using hierarchical metrics and introduce Catastrophic Abstraction Errors to capture cross-branch mistakes. Our results reveal substantial misalignment of VLMs with clinical taxonomies despite high flat performance. To address this, we propose risk-constrained thresholding and taxonomy-aware fine-tuning with radial embeddings, which reduce severe abstraction errors to below 2 per cent while maintaining competitive performance. These findings highlight the importance of hierarchical evaluation and representation-level alignment for safer and more clinically meaningful deployment of VLMs.
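As a sketch of how risk-constrained thresholding might operate, the snippet below scans confidence thresholds on a validation set and keeps the smallest one whose accepted predictions stay under a catastrophic-error budget; low-confidence predictions abstain. The 2 per cent budget echoes the paper’s reported figure, but the function, its signature, and the abstention rule are our own assumptions rather than the authors’ procedure.

```python
import numpy as np

def pick_threshold(conf, catastrophic, budget=0.02):
    """Smallest confidence threshold whose accepted predictions keep
    the catastrophic-error rate within `budget`. Predictions below the
    threshold abstain (e.g. are deferred to a radiologist).

    conf:         per-prediction confidences on a validation set
    catastrophic: per-prediction flags, e.g. from is_catastrophic above
    """
    conf = np.asarray(conf, dtype=float)
    catastrophic = np.asarray(catastrophic, dtype=bool)
    for tau in np.linspace(0.0, 1.0, 101):
        keep = conf >= tau
        if not keep.any():
            return tau  # everything abstains: trivially within budget
        if catastrophic[keep].mean() <= budget:
            return tau
    return 1.0

# Toy validation run: the two risky, low-confidence predictions force
# the threshold up until only safe predictions remain.
conf = [0.95, 0.90, 0.80, 0.55, 0.40]
cata = [False, False, False, True, True]
print(pick_threshold(conf, cata))  # ~0.56
```

The trade-off is explicit: a tighter budget raises the threshold and the abstention rate, exchanging coverage for safety.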