Vision-Language Models Encode Clinical Guidelines for Concept-Based Medical Reasoning

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited reliability of conventional concept bottleneck models in medical image analysis, which stems from their inability to incorporate clinical guidelines and expert knowledge—particularly in complex cases. To bridge this gap, the authors propose MedCBR, a novel framework that integrates clinical guideline texts with concept bottleneck modeling for the first time. MedCBR jointly aligns image features, medical concepts, and pathological diagnoses through multitask learning and generates structured, guideline-compliant explanations. The approach combines vision–language models with a structured reasoning module, employing multimodal contrastive alignment, concept supervision, and diagnostic classification in a co-training scheme. Evaluated on ultrasound and mammography datasets, MedCBR achieves AUROC scores of 94.2% and 84.0%, respectively, and attains 86.1% accuracy on a non-medical benchmark, significantly outperforming baseline methods while enhancing both reliability and interpretability in medical AI.

📝 Abstract
Concept Bottleneck Models (CBMs) are a prominent framework for interpretable AI that map learned visual features to a set of meaningful concepts for task-specific downstream predictions. Their sequential structure enhances transparency by connecting model predictions to the underlying concepts that support them. In medical imaging, where transparency is essential, CBMs offer an appealing foundation for explainable model design. However, discrete concept representations often overlook broader clinical context such as diagnostic guidelines and expert heuristics, reducing reliability in complex cases. We propose MedCBR, a concept-based reasoning framework that integrates clinical guidelines with vision-language and reasoning models. Labeled clinical descriptors are transformed into guideline-conformant text, and a concept-based model is trained with a multitask objective combining multimodal contrastive alignment, concept supervision, and diagnostic classification to jointly ground image features, concepts, and pathology. A reasoning model then converts these predictions into structured clinical narratives that explain the diagnosis, emulating expert reasoning based on established guidelines. MedCBR achieves superior diagnostic and concept-level performance, with AUROCs of 94.2% on ultrasound and 84.0% on mammography. Further experiments on non-medical datasets achieve 86.1% accuracy. Our framework enhances interpretability and forms an end-to-end bridge from medical image analysis to decision-making.
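The abstract describes a multitask objective combining three training signals: multimodal contrastive alignment between image and guideline-text embeddings, multi-label supervision on clinical concepts, and cross-entropy on the final diagnosis. As a rough illustration of how such a combined objective can be computed, here is a minimal NumPy sketch; the function names, the simple weighted-sum combination, and all shapes are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def _softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_alignment(img_emb, txt_emb, temp=0.07):
    """Symmetric InfoNCE: matched image/text pairs sit on the diagonal."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / temp                       # (B, B) similarity matrix
    idx = np.arange(sim.shape[0])
    p_i2t = _softmax(sim, axis=1)[idx, idx]        # image-to-text match prob.
    p_t2i = _softmax(sim, axis=0)[idx, idx]        # text-to-image match prob.
    return float(-(np.log(p_i2t) + np.log(p_t2i)).mean() / 2)

def concept_supervision(concept_logits, concept_labels, eps=1e-9):
    """Multi-label BCE over clinical concepts (e.g. margin, shape)."""
    p = 1.0 / (1.0 + np.exp(-concept_logits))
    return float(-(concept_labels * np.log(p + eps)
                   + (1 - concept_labels) * np.log(1 - p + eps)).mean())

def diagnostic_classification(diag_logits, diag_labels, eps=1e-9):
    """Cross-entropy over pathology classes (e.g. benign vs. malignant)."""
    p = _softmax(diag_logits, axis=1)
    return float(-np.log(p[np.arange(len(diag_labels)), diag_labels] + eps).mean())

def multitask_loss(img_emb, txt_emb, concept_logits, concept_labels,
                   diag_logits, diag_labels, weights=(1.0, 1.0, 1.0)):
    """Assumed weighted sum of the three training signals from the abstract."""
    return (weights[0] * contrastive_alignment(img_emb, txt_emb)
            + weights[1] * concept_supervision(concept_logits, concept_labels)
            + weights[2] * diagnostic_classification(diag_logits, diag_labels))
```

In this reading, the concept logits act as the bottleneck layer that the diagnostic head consumes, so all three losses jointly ground images, concepts, and pathology; the actual loss weighting and architecture are specified in the paper itself.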
Problem

Research questions and friction points this paper is trying to address.

Concept Bottleneck Models
Clinical Guidelines
Medical Imaging
Interpretable AI
Concept Representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept Bottleneck Models
Vision-Language Models
Clinical Guidelines
Interpretable AI
Medical Reasoning