Towards Universal Text-driven CT Image Segmentation

📅 2025-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-prompted segmentation methods generalize poorly to real-world clinical CT segmentation, hindered by fixed-vocabulary constraints and the scarcity of voxel-level annotations. To address this, we propose OpenVocabCT, the first open-vocabulary, text-driven 3D segmentation framework tailored for CT imaging. The approach comprises three core components: (1) leveraging CT-RATE, a large-scale dataset of paired radiology reports and CT volumes; (2) LLM-assisted parsing of diagnostic reports into fine-grained, organ-level descriptions; and (3) multi-granular contrastive pretraining that aligns text with voxels through a hybrid 3D CNN-ViT visual encoder. The framework enables zero-shot 3D localization and segmentation of organs and lesions directly from arbitrary clinical descriptions (e.g., “metastatic tumor in the right hepatic lobe”) without fine-tuning. Evaluated on nine public CT datasets, it significantly outperforms baselines including SAM and CLIP-driven methods while generalizing robustly to unseen organ and lesion categories. Code, models, and data are fully open-sourced.
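The text-voxel alignment pretraining described above can be sketched as a symmetric InfoNCE objective between organ-level text embeddings and the matching image-region embeddings. This is a minimal illustration under assumed shapes, not the paper's implementation; the `contrastive_loss` helper, embedding dimension, and temperature are all assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity matrix: row i of images vs. every text; diagonal = matched pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    # Symmetric InfoNCE: image-to-text and text-to-image cross entropy.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

torch.manual_seed(0)
img = torch.randn(4, 256)  # stand-in for organ-level 3D image-region embeddings
txt = torch.randn(4, 256)  # stand-in for LLM-parsed organ-level text embeddings
loss = contrastive_loss(img, txt)
```

In the multi-granular setting, the same loss would be applied at more than one granularity (e.g., whole-report and organ-level pairs), summing the per-granularity terms.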

📝 Abstract
Computed tomography (CT) is extensively used for accurate visualization and segmentation of organs and lesions. While deep learning models such as convolutional neural networks (CNNs) and vision transformers (ViTs) have significantly improved CT image analysis, their performance often declines when applied to diverse, real-world clinical data. Although foundation models offer a broader and more adaptable solution, their potential is limited due to the challenge of obtaining large-scale, voxel-level annotations for medical images. In response to these challenges, prompting-based models using visual or text prompts have emerged. Visual-prompting methods, such as the Segment Anything Model (SAM), still require significant manual input and can introduce ambiguity when applied to clinical scenarios. Instead, foundation models that use text prompts offer a more versatile and clinically relevant approach. Notably, current text-prompt models, such as the CLIP-Driven Universal Model, are limited to text prompts already encountered during training and struggle to process the complex and diverse scenarios of real-world clinical applications. Instead of fine-tuning models trained from natural imaging, we propose OpenVocabCT, a vision-language model pretrained on large-scale 3D CT images for universal text-driven segmentation. Using the large-scale CT-RATE dataset, we decompose the diagnostic reports into fine-grained, organ-level descriptions using large language models for multi-granular contrastive learning. We evaluate our OpenVocabCT on downstream segmentation tasks across nine public datasets for organ and tumor segmentation, demonstrating the superior performance of our model compared to existing methods. All code, datasets, and models will be publicly released at https://github.com/ricklisz/OpenVocabCT.
Problem

Research questions and friction points this paper is trying to address.

Deep segmentation models (CNNs, ViTs) degrade on diverse, real-world clinical CT data.
Foundation models are constrained by the scarcity of large-scale, voxel-level medical annotations.
Existing text-prompt models (e.g., the CLIP-Driven Universal Model) are limited to prompts seen during training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

OpenVocabCT: a vision-language model pretrained on large-scale 3D CT volumes for universal text-driven segmentation
LLM-based decomposition of diagnostic reports into fine-grained, organ-level descriptions
Multi-granular contrastive learning that aligns text with voxels for open-vocabulary generalization
Yuheng Li
Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332 USA
Yuxiang Lai
Ph.D. Student in Computer Science, Emory University
Computer Vision · Medical Imaging
Maria Thor
Assistant Attending Physicist, Memorial Sloan Kettering Cancer Center
Medical Physics · Radiobiology · Imaging
Deborah Marshall
Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY 10029
Zachary Buchwald
Department of Radiation Oncology, Emory University School of Medicine, GA 30322 USA
David S. Yu
Department of Radiation Oncology, Emory University School of Medicine, GA 30322 USA
Xiaofeng Yang
Department of Biomedical Engineering, Georgia Institute of Technology, Emory University, Atlanta, GA 30332 USA; Department of Radiation Oncology, Emory University School of Medicine, GA 30322 USA