🤖 AI Summary
Clinical lesion segmentation must accommodate clinicians’ personalized requirements for specific lesions, yet existing models lack flexible, text-driven localization capabilities. To address this, we propose a novel task—Referring Lesion Segmentation (RLS)—enabling precise segmentation of target lesions in medical images guided by natural language descriptions. Methodologically, we introduce RefHL-Seg, the first referring-expression dataset for hepatic lesions, and propose a language-guided scale-aware framework featuring a Scale-Aware Vision-Language Attention module and a full-scale decoder. This design integrates multi-scale convolutional features, cross-modal alignment, and a contrastive learning loss. Our approach achieves state-of-the-art performance on both RLS and conventional lesion segmentation benchmarks, while reducing computational overhead and demonstrating strong generalization across diverse lesion types and imaging modalities.
📝 Abstract
In clinical practice, segmenting specific lesions based on the needs of physicians can significantly enhance diagnostic accuracy and treatment efficiency. However, conventional lesion segmentation models lack the flexibility to distinguish lesions according to specific requirements. Given the practical advantages of using text as guidance, we propose a novel model, Language-guided Scale-aware MedSegmentor (LSMS), which segments target lesions in medical images based on given textual expressions. We define this as a new task termed Referring Lesion Segmentation (RLS). To address the lack of suitable benchmarks for RLS, we construct a vision-language medical dataset named Reference Hepatic Lesion Segmentation (RefHL-Seg). LSMS incorporates two key designs: (i) a Scale-Aware Vision-Language Attention module, which performs visual feature extraction and vision-language alignment in parallel. By leveraging diverse convolutional kernels, this module acquires rich visual representations and interacts closely with linguistic features, thereby enhancing the model's capacity for precise object localization. (ii) A Full-Scale Decoder, which globally models multi-modal features across multiple scales and captures complementary information between them to accurately delineate lesion boundaries. Additionally, we design a specialized loss function comprising both a segmentation loss and a vision-language contrastive loss to better optimize cross-modal learning. We validate the performance of LSMS on RLS as well as on conventional lesion segmentation tasks across multiple datasets. LSMS consistently achieves superior performance with significantly lower computational cost. Code and datasets will be released.
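The abstract mentions a vision-language contrastive loss paired with a segmentation loss, but gives no formula. A common instantiation of such a loss is a symmetric InfoNCE objective over matched image-region/text-expression embedding pairs; the sketch below shows that standard form in NumPy. The function name, the temperature value, and the pairing assumption (row `i` of both matrices describes the same lesion) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_vl_loss(vis_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style vision-language contrastive loss (an assumed
    form, not the paper's exact loss). Row i of vis_emb and txt_emb are
    assumed to be a matched (lesion image, referring expression) pair."""
    v = l2_normalize(np.asarray(vis_emb, dtype=float))
    t = l2_normalize(np.asarray(txt_emb, dtype=float))
    logits = v @ t.T / temperature          # (N, N) cosine similarities
    n = logits.shape[0]
    labels = np.arange(n)                   # diagonal entries are the positives

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned, mutually orthogonal embeddings the loss approaches zero; mismatched embeddings yield a larger positive value, which is the signal that drives cross-modal alignment during training.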