🤖 AI Summary
Existing text-prompted segmentation methods generalize poorly to real-world clinical CT, hindered by fixed-vocabulary constraints and the scarcity of voxel-level annotations. To address this, we propose OpenVocabCT, the first open-vocabulary, text-driven 3D segmentation framework tailored to CT imaging. Our approach comprises three core components: (1) pretraining on CT-RATE, a large-scale dataset of paired radiology reports and 3D CT volumes; (2) organ-level, multi-granularity contrastive learning, in which large language models parse diagnostic reports into fine-grained organ-level descriptions, overcoming the lexical limitations of fixed text prompts; and (3) a hybrid 3D CNN-ViT visual encoder pretrained with text-voxel alignment. The framework enables zero-shot 3D localization and segmentation of organs or lesions directly from arbitrary clinical descriptions (e.g., “metastatic tumor in the right hepatic lobe”) without fine-tuning. Evaluated on nine public CT datasets, it significantly outperforms baselines including SAM and CLIP-driven methods, while generalizing robustly to unseen organ and lesion categories. Code, models, and data will be publicly released.
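The zero-shot inference described above can be sketched as follows. This is a minimal illustration, not the released OpenVocabCT API: the encoder modules, feature dimensions, and thresholding scheme are all assumptions. The idea is simply that a free-text prompt and every voxel are embedded into a shared space, and voxels whose features align with the prompt embedding form the predicted mask.

```python
import torch
import torch.nn.functional as F

def zero_shot_segment(ct_volume, prompt, image_encoder, text_encoder,
                      threshold=0.5):
    """Hypothetical zero-shot text-driven segmentation.

    ct_volume:  (1, 1, D, H, W) CT tensor
    prompt:     free-text clinical description, e.g.
                "metastatic tumor in the right hepatic lobe"
    returns:    (D, H, W) boolean mask
    """
    # Per-voxel features from a (hypothetical) 3D visual encoder: (1, C, D, H, W)
    voxel_feats = image_encoder(ct_volume)
    # A single prompt embedding from a (hypothetical) text encoder: (1, C)
    text_emb = text_encoder(prompt)

    # Cosine similarity between the prompt and every voxel feature
    voxel_feats = F.normalize(voxel_feats, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    sim = torch.einsum("bcdhw,bc->bdhw", voxel_feats, text_emb)  # (1, D, H, W)

    # Squash similarities to (0, 1) and threshold into a binary mask
    return (sim.sigmoid() > threshold).squeeze(0)
```

Because segmentation reduces to a similarity test against an arbitrary text embedding, no category list is fixed at training time, which is what makes the open-vocabulary setting possible.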
📝 Abstract
Computed tomography (CT) is extensively used for accurate visualization and segmentation of organs and lesions. While deep learning models such as convolutional neural networks (CNNs) and vision transformers (ViTs) have significantly improved CT image analysis, their performance often declines when applied to diverse, real-world clinical data. Although foundation models offer a broader and more adaptable solution, their potential is limited by the challenge of obtaining large-scale, voxel-level annotations for medical images. In response to these challenges, prompting-based models using visual or text prompts have emerged. Visual-prompting methods, such as the Segment Anything Model (SAM), still require significant manual input and can introduce ambiguity when applied to clinical scenarios. In contrast, foundation models that use text prompts offer a more versatile and clinically relevant approach. Notably, current text-prompt models, such as the CLIP-Driven Universal Model, are limited to text prompts already encountered during training and struggle with the complex and diverse scenarios of real-world clinical applications. Rather than fine-tuning models pretrained on natural images, we propose OpenVocabCT, a vision-language model pretrained on large-scale 3D CT images for universal text-driven segmentation. Using the large-scale CT-RATE dataset, we decompose the diagnostic reports into fine-grained, organ-level descriptions using large language models for multi-granular contrastive learning. We evaluate OpenVocabCT on downstream segmentation tasks across nine public datasets for organ and tumor segmentation, demonstrating the superior performance of our model compared to existing methods. All code, datasets, and models will be publicly released at https://github.com/ricklisz/OpenVocabCT.
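The multi-granular contrastive pretraining mentioned above can be illustrated with a short sketch. This is an assumption-laden outline, not the paper's exact objective: it pairs a standard symmetric InfoNCE loss at two granularities, whole-report/whole-volume pairs and LLM-extracted organ-level description/organ-feature pairs. All function names, shapes, and the weighting scheme are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of matched pairs.

    img_emb, txt_emb: (B, C) embeddings; row i of each tensor is a matched pair.
    """
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / temperature      # (B, B) pairwise similarities
    targets = torch.arange(img_emb.size(0))           # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multi_granular_loss(volume_emb, report_emb,
                        organ_embs, organ_text_embs, organ_weight=0.5):
    """Combine global (report-level) and fine-grained (organ-level) terms.

    volume_emb, report_emb:      (B, C) global CT-volume / full-report embeddings.
    organ_embs, organ_text_embs: (N, C) pooled organ features paired with the
                                 corresponding LLM-parsed organ descriptions.
    organ_weight:                hypothetical weight balancing the two granularities.
    """
    global_term = info_nce(volume_emb, report_emb)    # report <-> volume alignment
    organ_term = info_nce(organ_embs, organ_text_embs)  # description <-> organ alignment
    return global_term + organ_weight * organ_term
```

The organ-level term is what distinguishes this from plain CLIP-style pretraining: by contrasting against many phrasings of organ-level findings rather than a closed label set, the text encoder learns to ground varied clinical language in voxel features.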