Knowledge to Sight: Reasoning over Visual Attributes via Knowledge Decomposition for Abnormality Grounding

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of precisely localizing abnormal regions in medical images from textual descriptions. It proposes K2Sight, a knowledge-guided vision-language alignment framework that decomposes clinical concepts into interpretable visual attributes, such as shape, density, and anatomical location. Structured attributes are distilled from medical ontologies and converted into instruction-style prompts, explicitly modeling the mapping between domain knowledge and image space. Using lightweight vision-language models (0.23B/2B parameters), K2Sight combines attribute-aware prompt engineering with fine-grained region-text alignment training. With only 1.5% of the annotated data required by prior work, it achieves up to a 9.82% improvement in mAP₅₀, matching or surpassing 7B-parameter medical VLMs. The core contribution is an interpretable, low-resource, high-accuracy paradigm for knowledge-to-vision reasoning.

📝 Abstract
In this work, we address the problem of grounding abnormalities in medical images, where the goal is to localize clinical findings based on textual descriptions. While generalist Vision-Language Models (VLMs) excel in natural grounding tasks, they often struggle in the medical domain due to rare, compositional, and domain-specific terms that are poorly aligned with visual patterns. Specialized medical VLMs address this challenge via large-scale domain pretraining, but at the cost of substantial annotation and computational resources. To overcome these limitations, we propose Knowledge to Sight (K2Sight), a framework that introduces structured semantic supervision by decomposing clinical concepts into interpretable visual attributes, such as shape, density, and anatomical location. These attributes are distilled from domain ontologies and encoded into concise instruction-style prompts, which guide region-text alignment during training. Unlike conventional report-level supervision, our approach explicitly bridges domain knowledge and spatial structure, enabling data-efficient training of compact models. We train compact models with 0.23B and 2B parameters using only 1.5% of the data required by state-of-the-art medical VLMs. Despite their small size and limited training data, these models achieve performance on par with or better than 7B+ medical VLMs, with up to 9.82% improvement in mAP₅₀. Code and models: https://lijunrio.github.io/K2Sight/
Problem

Research questions and friction points this paper is trying to address.

Grounding abnormalities in medical images using textual descriptions
Overcoming rare and domain-specific terms in medical VLMs
Training compact models efficiently with structured semantic supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes clinical concepts into visual attributes
Uses concise instruction-style prompts for training
Trains compact models with minimal data efficiently
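The attribute-to-prompt step described above can be sketched as follows. This is a hypothetical illustration, not code from the paper: the attribute names, the prompt template, and the function `attributes_to_prompt` are all assumptions about how ontology-derived attributes might be rendered as a concise instruction-style prompt for region-text alignment.

```python
# Hypothetical sketch: render structured visual attributes distilled from
# a medical ontology as an instruction-style grounding prompt.
# The template and field names are illustrative, not from the paper.

def attributes_to_prompt(finding: str, attributes: dict) -> str:
    """Turn ontology-derived attributes into a concise instruction prompt."""
    parts = [f"{key}: {value}" for key, value in attributes.items()]
    return f"Locate the {finding}. Visual attributes -- " + "; ".join(parts) + "."

prompt = attributes_to_prompt(
    "pleural effusion",
    {"shape": "meniscus-shaped", "density": "homogeneous opacity",
     "location": "costophrenic angle"},
)
print(prompt)
```

A prompt like this would then accompany each region-text pair during alignment training, replacing free-form report sentences with compact, attribute-grounded supervision.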
Jun Li
Technical University of Munich

Che Liu
Imperial College London
Multimodal Learning, AI4Medicine

Wenjia Bai
Imperial College London

Mingxuan Liu
University of Trento

Rossella Arcucci
Associate Professor, Imperial College London
AI4Good, Data Learning, Data Assimilation, Machine Learning, Deep Learning

Cosmin I. Bercea
Technical University of Munich
Computer Vision, Multimodal Learning, Generative AI, Anomaly Detection, Medical Image Analysis

Julia A. Schnabel
Technical University of Munich, Helmholtz AI and Helmholtz Munich, King's College London