Enhancing Abnormality Grounding for Vision Language Models with Knowledge Descriptions

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenges of detecting and localizing pathological anomalies in medical images, and of visually grounding abstract medical terminology, this paper proposes a lightweight, knowledge-decomposed prompting paradigm. Methodologically, it decomposes complex medical concepts into visually alignable primitive attributes and common patterns, integrating them with the Florence-2 (0.23B) vision-language architecture to achieve fine-grained vision-text alignment and knowledge-enhanced prompting. Without requiring large-scale annotated data or billion-parameter models, the approach matches the anomaly localization accuracy of 7B-parameter medical vision-language models while using only 1.5% of their training data. It also improves generalization across anomaly types, for both seen and unseen abnormalities, and enhances clinical interpretability. The core innovations are knowledge-driven decoupled modeling of medical terms and visual features, and a data-efficient alignment mechanism suited to low-resource settings.
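To make the prompting idea concrete, here is a minimal sketch of how a complex abnormality term could be replaced by visually groundable attribute descriptions before querying a grounding VLM. The knowledge entries, the `KNOWLEDGE` dictionary, and the `build_grounding_prompt` helper are illustrative assumptions, not the paper's actual knowledge base or implementation:

```python
# Hypothetical sketch of knowledge-decomposed prompting: instead of
# asking the model to locate an abstract term directly, the term is
# expanded into primitive visual attributes that are easier to ground.

# Illustrative knowledge base (abnormality -> primitive visual attributes);
# these example entries are made up for demonstration.
KNOWLEDGE = {
    "pneumothorax": [
        "dark air-filled region without lung markings",
        "thin pleural line displaced from the chest wall",
    ],
    "cardiomegaly": [
        "enlarged heart silhouette",
        "cardiothoracic ratio appearing greater than half the chest width",
    ],
}

def build_grounding_prompt(abnormality: str) -> str:
    """Compose a grounding prompt from decomposed attribute descriptions."""
    attributes = KNOWLEDGE.get(abnormality, [])
    if not attributes:
        # Fall back to the plain term when no decomposition is available.
        return f"Locate {abnormality}."
    described = "; ".join(attributes)
    return f"Locate {abnormality}, which appears as: {described}."

prompt = build_grounding_prompt("pneumothorax")
print(prompt)
```

The resulting prompt string would then be passed to the grounding model (e.g. Florence-2) in place of the bare abnormality name, which is the alignment strategy the summary describes.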

📝 Abstract
Visual Language Models (VLMs) have demonstrated impressive capabilities in visual grounding tasks. However, their effectiveness in the medical domain, particularly for abnormality detection and localization within medical images, remains underexplored. A major challenge is the complex and abstract nature of medical terminology, which makes it difficult to directly associate pathological anomaly terms with their corresponding visual features. In this work, we introduce a novel approach to enhance VLM performance in medical abnormality detection and localization by leveraging decomposed medical knowledge. Instead of directly prompting models to recognize specific abnormalities, we focus on breaking down medical concepts into fundamental attributes and common visual patterns. This strategy promotes a stronger alignment between textual descriptions and visual features, improving both the recognition and localization of abnormalities in medical images. We evaluate our method on the 0.23B Florence-2 base model and demonstrate that it achieves comparable performance in abnormality grounding to significantly larger 7B LLaVA-based medical VLMs, despite being trained on only 1.5% of the data used for such models. Experimental results also demonstrate the effectiveness of our approach in both known and previously unseen abnormalities, suggesting its strong generalization capabilities.
Problem

Research questions and friction points this paper is trying to address.

Improves medical abnormality detection in VLMs
Aligns textual descriptions with visual features
Enhances generalization for unseen abnormalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages decomposed medical knowledge for VLMs
Breaks down medical concepts into visual patterns
Enhances abnormality detection with minimal data
Jun Li
Technical University of Munich, Germany; Munich Center for Machine Learning, Germany
Che Liu
Imperial College London
Multimodal Learning, AI4Medicine
Wenjia Bai
Imperial College London, UK
Rossella Arcucci
Associate Professor, Imperial College London
AI4Good, Data Learning, Data Assimilation, Machine Learning, Deep Learning
Cosmin I. Bercea
Technical University of Munich
Computer Vision, Multimodal Learning, Generative AI, Anomaly Detection, Medical Image Analysis
Julia A. Schnabel
Technical University of Munich, Germany; Munich Center for Machine Learning, Germany; Helmholtz AI and Helmholtz Munich, Germany; King’s College London, UK