SCALE-VLP: Soft-Weighted Contrastive Volumetric Vision-Language Pre-training with Spatial-Knowledge Semantics

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) predominantly rely on 2D slice-wise processing and binary supervision, limiting their ability to capture the spatial continuity and clinically meaningful semantic structures inherent in 3D medical imaging (e.g., CT). To address this, we propose a soft-weighted contrastive learning framework that jointly models anatomical fidelity and clinical semantics via voxel-level spatially aware encoding and radiology-ontology-guided semantic weighting. By injecting structured domain knowledge, our method generates hierarchical supervision signals, enhancing cross-modal representation learning under limited annotation budgets. Experiments demonstrate substantial improvements: up to 4.3× higher top-1 accuracy in CT-report retrieval, +10 percentage points in abnormality classification accuracy, ROUGE-L of 0.44, and BERT-F1 of 0.89 in report generation. Moreover, the model exhibits strong zero-shot cross-domain transfer capability.

📝 Abstract
Vision-language models (VLMs) have demonstrated strong cross-modal capabilities, yet most work remains limited to 2D data and assumes binary supervision (i.e., positive vs. negative pairs), overlooking the continuous and structured dependencies present in volumetric data such as CT. Existing approaches often treat volumetric scans as independent 2D slices, compromising spatial coherence and underutilizing rich clinical semantics. We propose SCALE-VLP, a soft-weighted contrastive vision-language pre-training framework that integrates (i) volumetric spatial semantics to preserve anatomical structure and (ii) domain-aware, knowledge-infused semantics (e.g., radiological ontologies) to guide alignment. This yields structurally consistent and semantically grounded representations under limited supervision, demonstrating strong cross-task transferability (retrieval, report generation, and classification) and cross-domain generalizability with consistent gains and no further fine-tuning. In particular, compared to the previous state of the art, SCALE-VLP achieves up to 4.3× higher top-1 CT-report retrieval accuracy, improves abnormality classification by 10 points, and reaches ROUGE-L 0.44 and BERT-F1 0.89 for report generation. Further, in zero-shot evaluation on an out-of-domain external dataset, we observe consistent gains, indicating the cross-task and cross-domain generalization ability of SCALE-VLP.
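To make the soft-weighted objective concrete, the sketch below shows a generic soft-target contrastive loss, assuming a common formulation in which the binary one-hot targets of CLIP-style training are replaced by soft targets derived from pairwise similarity of knowledge-infused report embeddings. The function name, the temperatures, and the use of a separate semantic embedding (`sem_emb`) as the weighting signal are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a soft-weighted contrastive loss (illustrative, NOT the
# authors' code). Soft targets come from pairwise similarity of
# knowledge-infused report embeddings instead of binary positive/negative labels.
import torch
import torch.nn.functional as F

def soft_weighted_contrastive_loss(img_emb, txt_emb, sem_emb, tau=0.07, tau_t=0.1):
    """img_emb, txt_emb: (B, D) volume and report embeddings to align.
    sem_emb: (B, D_s) hypothetical ontology-aware report embeddings, used only
    to derive soft (non-binary) supervision weights. tau, tau_t: temperatures."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sem = F.normalize(sem_emb, dim=-1)

    logits = img @ txt.t() / tau  # (B, B) volume-to-report similarities
    # Soft targets: semantically related (not just paired) studies get weight.
    # sem @ sem.t() is symmetric, so the same targets serve both directions.
    targets = F.softmax(sem @ sem.t() / tau_t, dim=-1)

    # Cross-entropy against soft targets, symmetrized over both retrieval directions.
    loss_i2t = -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    loss_t2i = -(targets * F.log_softmax(logits.t(), dim=-1)).sum(dim=-1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```

With `targets` replaced by the identity matrix, this reduces to the standard binary contrastive loss; the soft variant instead spreads probability mass over clinically related volume-report pairs, which is the behavior the abstract attributes to SCALE-VLP.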
Problem

Research questions and friction points this paper is trying to address.

Addresses volumetric vision-language pre-training for 3D medical data
Integrates spatial and clinical semantics to preserve anatomical structure
Enhances cross-task and cross-domain generalization with limited supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Soft-weighted contrastive learning for volumetric vision-language pre-training
Integrates spatial semantics to preserve anatomical structure
Incorporates domain-aware knowledge-infused semantics for alignment
Authors

Ailar Mahdizadeh, University of British Columbia
Puria Azadi Moghadam, University of British Columbia
Xiangteng He, University of British Columbia (Fine-grained Visual Analysis, Vision-Language Models, Computer Vision)
S. Mirabbasi, University of British Columbia
P. Nasiopoulos, University of British Columbia
Leonid Sigal, Professor, University of British Columbia (Computer Vision, Machine Learning)