Scaling Self-Supervised and Cross-Modal Pretraining for Volumetric CT Transformers

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in volumetric CT (extreme token scale, geometric anisotropy, and weak or noisy clinical supervision), this paper introduces SPECTRE, the first fully transformer-based foundation model for volumetric CT. Methodologically, it features: (1) a hybrid 3D Vision Transformer architecture that combines local windowed and global sparse attention for efficient long-range modeling; (2) a joint self-supervised pretraining paradigm combining DINO-style distillation on public CT data with SigLIP-guided cross-modal vision-language alignment; and (3) unsupervised learning of robust, generalizable CT representations without manual annotations. Evaluated across multiple zero-shot and fine-tuning benchmarks, SPECTRE consistently outperforms existing CT foundation models, demonstrating superior scalability, performance, and open reproducibility.

📝 Abstract
We introduce SPECTRE, a fully transformer-based foundation model for volumetric computed tomography (CT). Our Self-Supervised & Cross-Modal Pretraining for CT Representation Extraction (SPECTRE) approach utilizes scalable 3D Vision Transformer architectures and modern self-supervised and vision-language pretraining strategies to learn general-purpose CT representations. Volumetric CT poses unique challenges, such as extreme token scaling, geometric anisotropy, and weak or noisy clinical supervision, that make standard transformer and contrastive learning recipes ineffective out of the box. The framework jointly optimizes a local transformer for high-resolution volumetric feature extraction and a global transformer for whole-scan context modeling, making large-scale 3D attention computationally tractable. Notably, SPECTRE is trained exclusively on openly available CT datasets, demonstrating that high-performing, generalizable representations can be achieved without relying on private data. Pretraining combines DINO-style self-distillation with SigLIP-based vision-language alignment using paired radiology reports, yielding features that are both geometrically consistent and clinically meaningful. Across multiple CT benchmarks, SPECTRE consistently outperforms prior CT foundation models in both zero-shot and fine-tuned settings, establishing SPECTRE as a scalable, open, and fully transformer-based foundation model for 3D medical imaging.
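The local/global split described in the abstract can be illustrated with a minimal NumPy sketch. This is not SPECTRE's implementation: it uses non-overlapping 1D windows in place of 3D volumetric windows, and a strided key subset as a stand-in for the paper's global sparse attention; all function names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def local_windowed_attention(tokens, window=4):
    # tokens: (N, d); each token attends only within its
    # non-overlapping window (the "local transformer" idea)
    n, _ = tokens.shape
    out = np.empty_like(tokens)
    for start in range(0, n, window):
        w = tokens[start:start + window]
        out[start:start + window] = attention(w, w, w)
    return out

def global_sparse_attention(tokens, stride=4):
    # every token attends to a strided token subset, a crude
    # stand-in for whole-scan sparse global attention
    keys = tokens[::stride]
    return attention(tokens, keys, keys)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))          # 16 patch tokens, dim 8
y = local_windowed_attention(x) + global_sparse_attention(x)
```

The point of the hybrid is cost: full attention over a CT volume's token grid is quadratic in token count, while windowed plus strided attention keeps per-token cost roughly constant.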
Problem

Research questions and friction points this paper is trying to address.

Addresses volumetric CT challenges like token scaling and geometric anisotropy
Develops scalable 3D transformer architecture for medical imaging analysis
Creates open foundation model using self-supervised and cross-modal pretraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based foundation model for volumetric CT
Joint local and global transformers for scalable 3D attention
Combines self-distillation with vision-language alignment pretraining
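The SigLIP-based alignment named above replaces the usual softmax contrastive loss with an independent sigmoid loss per scan-report pair. A minimal NumPy sketch of that loss, assuming fixed temperature and bias values (in SigLIP these are learned parameters):

```python
import numpy as np

def siglip_loss(img, txt, t=10.0, b=-10.0):
    # img, txt: (B, d) batches of scan / report embeddings
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = t * img @ txt.T + b              # pairwise similarities
    labels = 2 * np.eye(len(img)) - 1         # +1 matched pair, -1 otherwise
    # -log sigmoid(labels * logits), averaged over all B*B pairs
    return np.mean(np.log1p(np.exp(-labels * logits)))
```

Because each pair contributes independently, the loss needs no batch-wide normalization, which is part of why sigmoid-style alignment scales well to large, noisy report-paired corpora.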