Language-guided Scale-aware MedSegmentor for Lesion Segmentation in Medical Imaging

📅 2024-08-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical lesion segmentation must accommodate clinicians’ personalized requirements for specific lesions, yet existing models lack text-driven flexible localization capability. To address this, we propose a novel task—Referring Lesion Segmentation (RLS)—enabling precise segmentation of target lesions in medical images guided by natural language descriptions. Methodologically, we introduce RefHL-Seg, the first referring expression dataset for hepatic lesions, and propose a language-guided scale-aware framework featuring a Scale-Aware Vision-Language Attention module and a full-scale decoder. This design integrates multi-scale convolutional features, cross-modal alignment, and contrastive learning loss. Our approach achieves state-of-the-art performance on both RLS and conventional lesion segmentation benchmarks, while reducing computational overhead and demonstrating strong generalization across diverse lesion types and imaging modalities.

📝 Abstract
In clinical practice, segmenting specific lesions based on the needs of physicians can significantly enhance diagnostic accuracy and treatment efficiency. However, conventional lesion segmentation models lack the flexibility to distinguish lesions according to specific requirements. Given the practical advantages of using text as guidance, we propose a novel model, Language-guided Scale-aware MedSegmentor (LSMS), which segments target lesions in medical images based on given textual expressions. We define this as a new task termed Referring Lesion Segmentation (RLS). To address the lack of suitable benchmarks for RLS, we construct a vision-language medical dataset named Reference Hepatic Lesion Segmentation (RefHL-Seg). LSMS incorporates two key designs: (i) Scale-Aware Vision-Language attention module, which performs visual feature extraction and vision-language alignment in parallel. By leveraging diverse convolutional kernels, this module acquires rich visual representations and interacts closely with linguistic features, thereby enhancing the model's capacity for precise object localization. (ii) Full-Scale Decoder, which globally models multi-modal features across multiple scales and captures complementary information between them to accurately delineate lesion boundaries. Additionally, we design a specialized loss function comprising both segmentation loss and vision-language contrastive loss to better optimize cross-modal learning. We validate the performance of LSMS on RLS as well as on conventional lesion segmentation tasks across multiple datasets. Our LSMS consistently achieves superior performance with significantly lower computational cost. Code and datasets will be released.
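The abstract describes a specialized objective combining a segmentation loss with a vision-language contrastive loss. A minimal sketch of such a combined objective is below; the Dice formulation, the InfoNCE-style contrastive term, the weighting `lam`, and all function names are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability mask and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def info_nce(vis, txt, temperature=0.07):
    """Symmetric InfoNCE over paired visual/text embeddings (row i of each is a pair)."""
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = vis @ txt.T / temperature
    def nll_diag(l):
        # log-softmax per row; the matching pair sits on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))
    return 0.5 * (nll_diag(logits) + nll_diag(logits.T))

def combined_loss(pred_mask, gt_mask, vis_emb, txt_emb, lam=0.5):
    """Segmentation loss plus a weighted vision-language contrastive term."""
    return dice_loss(pred_mask, gt_mask) + lam * info_nce(vis_emb, txt_emb)
```

The contrastive term pulls each image's visual embedding toward its paired text description and away from other descriptions in the batch, which is one common way to optimize cross-modal alignment.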
Problem

Research questions and friction points this paper is trying to address.

Segmenting lesions based on physicians' textual guidance
Lack of flexible lesion segmentation models for specific requirements
Absence of benchmarks for Referring Lesion Segmentation (RLS)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-guided Scale-aware MedSegmentor for lesion segmentation
Scale-Aware Vision-Language attention module
Full-Scale Decoder for multi-modal feature modeling
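The attention module listed above extracts visual features with diverse convolutional kernels while aligning them with linguistic features. A toy sketch of that idea follows, using 1-D token-sequence convolutions and single-head cross-attention as simplified stand-ins; all function names, the averaging kernels, and the fusion-by-mean step are assumptions for illustration.

```python
import numpy as np

def conv1d_same(x, kernel):
    """1-D convolution over the token axis with 'same' padding
    (toy stand-in for the module's 2-D convolutions)."""
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = sum(kernel[j] * xp[i + j] for j in range(len(kernel)))
    return out

def cross_attention(queries, keys_values):
    """Single-head attention: visual tokens attend to word embeddings."""
    scale = 1.0 / np.sqrt(queries.shape[1])
    scores = queries @ keys_values.T * scale
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ keys_values

def scale_aware_vl_attention(vis_tokens, txt_tokens, kernel_sizes=(1, 3, 5)):
    """Run parallel conv branches with different receptive fields, fuse them,
    then align the fused visual tokens with the language tokens."""
    branches = []
    for k in kernel_sizes:
        kernel = np.full(k, 1.0 / k)   # simple averaging kernel per branch
        branches.append(conv1d_same(vis_tokens, kernel))
    fused = np.mean(branches, axis=0)  # merge multi-scale visual features
    return fused + cross_attention(fused, txt_tokens)
```

The parallel branches give each visual token access to several receptive-field sizes before the language interaction, which is the intuition behind running feature extraction and vision-language alignment side by side.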
Shuyi Ouyang
Zhejiang University
Jinyang Zhang
Zhejiang University
Xiangye Lin
Zhejiang University
Xilai Wang
Zhejiang University
Qingqing Chen
Sir Run Run Shaw Hospital
Yen-Wei Chen
Ritsumeikan University
image processing · pattern recognition · medical image analysis
Lanfen Lin
Zhejiang University