🤖 AI Summary
This paper addresses the challenge of precisely localizing abnormal regions in medical images from textual descriptions. The authors propose K2Sight, a knowledge-guided vision-language alignment framework that decomposes clinical concepts into interpretable visual attributes—such as shape, density, and anatomical location—by extracting structured attributes from medical ontologies and converting them into instruction-style prompts. This explicitly models the mapping between domain knowledge and image space. Using lightweight vision-language models (0.23B/2B parameters), K2Sight combines attribute-aware prompt engineering with fine-grained region-text alignment training. With only 1.5% of the annotated data used by prior work, it achieves up to a 9.82% improvement in mAP₅₀, matching or surpassing 7B-parameter medical VLMs. The core contribution is an interpretable, low-resource, high-accuracy paradigm for knowledge-to-vision reasoning.
📝 Abstract
In this work, we address the problem of grounding abnormalities in medical images, where the goal is to localize clinical findings based on textual descriptions. While generalist Vision-Language Models (VLMs) excel at natural-image grounding tasks, they often struggle in the medical domain due to rare, compositional, and domain-specific terms that are poorly aligned with visual patterns. Specialized medical VLMs address this challenge via large-scale domain pretraining, but at the cost of substantial annotation and computational resources. To overcome these limitations, we propose **Knowledge to Sight (K2Sight)**, a framework that introduces structured semantic supervision by decomposing clinical concepts into interpretable visual attributes, such as shape, density, and anatomical location. These attributes are distilled from domain ontologies and encoded into concise instruction-style prompts, which guide region-text alignment during training. Unlike conventional report-level supervision, our approach explicitly bridges domain knowledge and spatial structure, enabling data-efficient training of compact models. We train compact models with 0.23B and 2B parameters using only 1.5% of the data required by state-of-the-art medical VLMs. Despite their small size and limited training data, these models achieve performance on par with or better than 7B+ medical VLMs, with up to 9.82% improvement in mAP₅₀. Code and models: https://lijunrio.github.io/K2Sight/
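To make the attribute-to-prompt step concrete, here is a minimal sketch of how ontology-derived attributes could be rendered into an instruction-style prompt. The function name, attribute schema, and prompt wording are illustrative assumptions, not K2Sight's actual implementation:

```python
# Hypothetical sketch: rendering structured visual attributes (as might be
# distilled from a medical ontology) into a concise instruction-style prompt
# for region-text alignment. Schema and wording are assumptions, not the
# paper's exact format.

def attributes_to_prompt(finding: str, attrs: dict) -> str:
    """Build an instruction-style prompt from a finding and its attributes."""
    parts = [f"Locate the {finding}."]
    if "shape" in attrs:
        parts.append(f"It typically appears as a {attrs['shape']} region.")
    if "density" in attrs:
        parts.append(f"Its density is {attrs['density']}.")
    if "location" in attrs:
        parts.append(f"It is usually found in the {attrs['location']}.")
    return " ".join(parts)

prompt = attributes_to_prompt(
    "pleural effusion",
    {
        "shape": "crescent-shaped",          # hypothetical attribute values
        "density": "high (fluid opacity)",
        "location": "costophrenic angle",
    },
)
print(prompt)
```

Such prompts replace free-form report text with a fixed, interpretable template, which is what lets the supervision signal tie each visual attribute to a specific region during fine-grained alignment.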