🤖 AI Summary
Medical image segmentation lacks effective support for natural language instructions commonly used by clinicians; existing methods predominantly rely on bounding-box or point-based prompts and fail to model free-text prompts robustly. To address this, we propose the first natural language–driven segmentation framework for medical imaging. Our approach introduces a Retrieval-Augmented Generation (RAG)–enhanced, anatomy-aware text prompt generator, a novel multimodal FLanS model, and a symmetry-aware normalization module to mitigate scanner orientation variability and anatomical ambiguity. The framework is jointly trained across seven public datasets (>100K images) to achieve deep language–image alignment. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art approaches both in-domain and cross-domain, with significant improvements in linguistic understanding robustness, anatomical structure mapping accuracy, and segmentation performance.
📝 Abstract
Medical imaging is crucial for diagnosing a patient's health condition, and accurate segmentation of medical images is essential for isolating regions of interest to ensure precise diagnosis and treatment planning. Existing methods primarily rely on bounding boxes or point-based prompts, while few have explored text-related prompts, despite clinicians often describing their observations and instructions in natural language. To address this gap, we first propose a RAG-based free-form text prompt generator that leverages the domain corpus to generate diverse and realistic descriptions. Then, we introduce FLanS, a novel medical image segmentation model that handles various free-form text prompts, including professional anatomy-informed queries, anatomy-agnostic position-driven queries, and anatomy-agnostic size-driven queries. Additionally, our model incorporates a symmetry-aware canonicalization module to ensure consistent, accurate segmentations across varying scan orientations and to reduce confusion between the anatomical position of an organ and its appearance in the scan. FLanS is trained on a large-scale dataset of over 100k medical images from 7 public datasets. Comprehensive experiments demonstrate the model's superior language understanding and segmentation precision, along with a deep comprehension of the relationship between them, outperforming state-of-the-art (SOTA) baselines on both in-domain and out-of-domain datasets.
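To make the canonicalization idea concrete, here is a minimal, hypothetical sketch (not the paper's actual module): scans acquired in different orientations are mapped to one canonical orientation, so that a position-driven query such as "the organ on the left side of the image" always refers to the same image axis. The `canonicalize` helper and its orientation flags are illustrative assumptions; in practice such flags would be predicted or read from scan metadata.

```python
import numpy as np

def canonicalize(scan: np.ndarray, flip_lr: bool = False, flip_ud: bool = False) -> np.ndarray:
    """Return the scan in a canonical orientation by undoing known flips.

    Hypothetical sketch: flip_lr/flip_ud indicate how the acquisition
    deviates from the canonical orientation.
    """
    if flip_ud:
        scan = scan[::-1, :]   # undo an up-down flip
    if flip_lr:
        scan = scan[:, ::-1]   # undo a left-right flip
    return scan

# Two acquisitions of the same slice, one mirrored left-right by the scanner;
# after canonicalization they agree pixel-for-pixel.
slice_a = np.arange(9).reshape(3, 3)
slice_b = slice_a[:, ::-1]  # same anatomy, mirrored acquisition
restored = canonicalize(slice_b, flip_lr=True)
print(np.array_equal(restored, slice_a))  # → True
```

With every scan in the same canonical frame, the model no longer has to disambiguate whether "left" means the patient's anatomical left or the left of the displayed image.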