🤖 AI Summary
Current CT foundation models rely heavily on large-scale image-text paired data and require costly backbone fine-tuning for downstream adaptation, hindering clinical deployment. This work proposes VoxelFM, a language-free foundation model that applies 3D self-supervised learning within a DINO self-distillation framework. By freezing the pretrained backbone and attaching lightweight task-specific probes, VoxelFM enables efficient transfer across diverse clinical applications. It matches or surpasses four leading CT foundation models across seven task categories (classification, regression, survival analysis, retrieval, localization, segmentation, and report generation), and on report generation it even outperforms models trained with explicit language alignment. These results demonstrate that robust and generalizable 3D semantic representations can be learned effectively without language supervision.
📝 Abstract
There is substantial interest in developing artificial intelligence systems to support radiologists across tasks ranging from segmentation to report generation. Existing computed tomography (CT) foundation models have largely focused on building generalist vision-language systems capable of tasks such as question answering and report generation. However, training reliable vision-language systems requires paired image-text data at a scale that remains unavailable in CT. Moreover, adapting the underlying visual representations to downstream tasks typically requires partial or full backbone fine-tuning, a computationally demanding process inaccessible to many research groups. Instead, foundation models should prioritise learning robust visual representations that enable efficient transfer to new tasks with minimal labelled data and without backbone fine-tuning. We present VoxelFM, a 3D CT foundation model trained with self-distillation using the DINO framework, which learns semantically rich features without language supervision. We evaluated VoxelFM across seven categories of clinically relevant downstream tasks using frozen backbone representations with lightweight probes: classification, regression, survival analysis, instance retrieval, localisation, segmentation, and report generation. VoxelFM matched or outperformed four existing CT foundation models across all task categories. Despite receiving no language supervision during pre-training, VoxelFM surpassed models explicitly trained with language-alignment objectives, including on report generation. Our results indicate that current CT foundation models perform significantly better as feature extractors for lightweight probes than as vision encoders for vision-language models. Model weights and training code are publicly available.
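The frozen-backbone-plus-probe pattern described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not VoxelFM's actual architecture: the "encoder" is a fixed random projection playing the role of a frozen pretrained backbone, and the probe is a closed-form ridge regression fit on the extracted features, so only a small per-task weight vector is ever learned.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 8 * 8 * 8, 64  # toy volume size and feature width (hypothetical)
PROJ = rng.standard_normal((D_IN, D_FEAT)) / np.sqrt(D_IN)  # frozen weights

def frozen_encoder(volumes):
    """Stand-in for a frozen pretrained backbone: maps each 3D volume
    to a fixed-length feature vector; these weights are never updated."""
    return volumes.reshape(len(volumes), -1) @ PROJ

def fit_linear_probe(feats, targets, l2=1e-2):
    """Lightweight probe: closed-form ridge regression on frozen features.
    Only this (D_FEAT,) weight vector is learned per downstream task."""
    A = feats.T @ feats + l2 * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ targets)

# Toy regression task on synthetic "CT volumes" (purely illustrative data).
volumes = rng.standard_normal((200, 8, 8, 8))
targets = volumes.mean(axis=(1, 2, 3))  # hypothetical scalar target
feats = frozen_encoder(volumes)         # one cheap forward pass, no gradients
w = fit_linear_probe(feats, targets)
preds = feats @ w
```

Because the backbone is frozen, features can be extracted once and cached; adapting to each new task then costs only a small linear solve rather than backbone fine-tuning, which is the efficiency argument the abstract makes.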