Learning to Read Where to Look: Disease-Aware Vision-Language Pretraining for 3D CT

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D CT vision–language models are limited by scarce publicly available data and coarse global supervision, which hinders fine-grained lesion localization and precise image–text alignment. This work assembles a large-scale in-house dataset of 98k CT–report pairs (50k patients), combines it with public data, and automatically mines 262k text snippet–slice correspondences to formulate a new "intra-scan snippet localization" task. Using SigLIP-style contrastive learning, prompt-based disease supervision, and multi-task joint training, the framework unifies global semantics and local anatomical detail in a shared embedding space, jointly optimizing retrieval, classification, and axial localization. On the CT-RATE benchmark, the model achieves state-of-the-art text-to-image retrieval (R@10 of 31.5 vs. 22.2), competitive disease classification (AUC 83.8), and a mean absolute error of 36.3 mm in snippet localization, roughly halving the 67.0 mm error of the best baseline.
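
The SigLIP-style objective named above is a pairwise sigmoid contrastive loss rather than a batch-wise softmax. The sketch below illustrates that loss for CT-volume and report embeddings; the function and variable names and the temperature/bias initialization are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a SigLIP-style pairwise sigmoid contrastive loss
# (assumed form, not the paper's exact implementation).
import torch
import torch.nn.functional as F

def siglip_loss(volume_emb, text_emb, log_temperature, bias):
    """volume_emb, text_emb: (B, D) L2-normalized CT-volume and report embeddings."""
    logits = volume_emb @ text_emb.t() * log_temperature.exp() + bias  # (B, B) pairwise similarities
    # +1 on the diagonal (matched volume-report pairs), -1 everywhere else
    labels = 2.0 * torch.eye(logits.size(0), device=logits.device) - 1.0
    # every (volume, report) pair is scored as an independent binary decision
    return -F.logsigmoid(labels * logits).mean()

# usage sketch with random embeddings
B, D = 8, 512
v = F.normalize(torch.randn(B, D), dim=-1)
t = F.normalize(torch.randn(B, D), dim=-1)
loss = siglip_loss(v, t, log_temperature=torch.tensor(2.3), bias=torch.tensor(-10.0))
```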

📝 Abstract
Recent 3D CT vision-language models align volumes with reports via contrastive pretraining, but typically rely on limited public data and provide only coarse global supervision. We train a 3D CT vision-language model on 98k report-volume pairs (50k patients) collected at a single hospital, combined with public datasets, using SigLIP-style contrastive pretraining together with prompt-based disease supervision in the shared vision-text embedding space. On CT-RATE, our model achieves state-of-the-art text-to-image retrieval (R@10 31.5 vs. 22.2) and competitive disease classification (AUC 83.8 vs. 83.8), with consistent results on Rad-ChestCT (AUC 77.0 vs. 77.3). We further observe that radiologists routinely reference specific images within their reports (e.g., "series X, image Y"), linking textual descriptions to precise axial locations. We automatically mine 262k such snippet-slice pairs and introduce the task of intra-scan snippet localization (predicting the axial depth referred to by a text snippet), reducing mean absolute error to 36.3 mm at 12 mm feature resolution, compared with 67.0 mm for the best baseline. Adding this localization objective leaves retrieval and classification broadly unchanged within confidence bounds, yielding a single unified model for retrieval, classification, and intra-scan grounding.
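
One plausible way to realize the intra-scan snippet localization objective described in the abstract is a soft-argmax over per-depth volume features, trained with an L1 loss so that the training objective matches the reported MAE metric. The head below is a hedged sketch under that assumption; the paper's actual localization head may differ, and all names here are hypothetical.

```python
# Hedged sketch of an intra-scan snippet localization head: a text snippet
# attends over axial volume features and the expected depth is regressed.
import torch
import torch.nn.functional as F

def localize_snippet(snippet_emb, slice_feats, slice_positions_mm):
    """
    snippet_emb:        (B, D)    embedding of a report text snippet
    slice_feats:        (B, N, D) axial feature tokens of the CT volume (e.g. ~12 mm spacing)
    slice_positions_mm: (B, N)    physical axial position of each feature token
    Returns the predicted axial position in mm, shape (B,).
    """
    scores = torch.einsum('bd,bnd->bn', snippet_emb, slice_feats)  # snippet-to-depth similarity
    weights = scores.softmax(dim=-1)                               # attention over axial positions
    return (weights * slice_positions_mm).sum(dim=-1)              # soft-argmax depth estimate

def localization_loss(pred_mm, target_mm):
    # mean absolute error in millimetres, matching the metric reported above
    return F.l1_loss(pred_mm, target_mm)
```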
Problem

Research questions and friction points this paper is trying to address.

3D CT
vision-language pretraining
disease-aware
intra-scan localization
fine-grained grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

disease-aware vision-language pretraining
intra-scan snippet localization
3D CT grounding
prompt-based disease supervision
SigLIP-style contrastive learning