🤖 AI Summary
This work addresses the inconsistent resolution, slice thickness, and slice count of medical CT images caused by variation across imaging devices. To tackle this, the authors propose a 3D vision–language pretraining framework that accepts input volumes of arbitrary size. The method models CT volumes as sequences of 3D patches and, for the first time in medical volumetric language pretraining, applies rotary positional encoding adapted to the unconstrained z-axis. By aligning local organ-level text descriptions with their corresponding 3D patches, the model achieves fine-grained, self-supervised image–text alignment at the patch level. Extensive experiments show that the proposed model significantly outperforms existing approaches on downstream tasks including zero-shot anomaly detection, organ classification, segmentation, and retrieval, highlighting its strong generalization to variable-sized CT volumes and its superior cross-modal representation learning.
📝 Abstract
Large-scale volumetric medical imaging datasets typically aggregate scans from different vendors and devices, resulting in highly variable resolution, slice thickness, and number of slices per study. Consequently, training representation models usually requires cropping or interpolating along the z-axis to obtain fixed-size blocks, which inevitably causes information loss. We propose a new training approach that overcomes this limitation. Instead of absolute position embeddings, we interpret volumes as sequences of 3D chunks and adopt Rotary Position Embedding (RoPE), allowing us to treat the z-axis as an unconstrained temporal dimension. Building on this idea, we introduce a new vision-language model: SigVLP. In SigVLP, RoPE is applied directly within the attention operation, generating sine and cosine weights on the fly from the input's patch positions. This design ensures consistent alignment between query and key projections and adapts to any input size. To allow variable input sizes during training, we sample Computed Tomography volumes in chunks and pair them with localized organ-wise textual observations. Compared to conditioning on entire reports, chunkwise alignment provides finer-grained supervision, enabling the model to establish stronger correlations between text and volume representations and thereby improving the precision of text-to-volume alignment. Our models are trained with the Muon optimizer and evaluated on a diverse set of downstream tasks, including zero-shot abnormality and organ classification, segmentation, and retrieval.
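To make the positional-encoding idea concrete, here is a minimal NumPy sketch of rotary position embedding applied along one axis. The function names and the interleaved channel-pairing convention are illustrative assumptions, not the paper's actual implementation; the key property it demonstrates is that after rotating queries and keys, their dot product depends only on the *relative* position offset, which is why the z-axis can be left unconstrained.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotation angles for each position; dim (the head dimension) must be even."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)  # shape (n_positions, dim/2)

def apply_rope(x, positions):
    """Rotate interleaved feature pairs of x (n_positions, dim) by
    position-dependent angles, computed on the fly for the given indices."""
    angles = rope_angles(positions, x.shape[-1])
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]       # split channels into rotation pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin    # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

In attention, both query and key projections would be passed through `apply_rope` with their patch indices before the dot product; because the resulting attention score depends only on the index difference, no fixed maximum number of slices needs to be baked into the model.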
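The model name SigVLP and the chunk-text pairing suggest a SigLIP-style pairwise sigmoid objective for the alignment step. The sketch below assumes that formulation; the function name and the temperature/bias values are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid_alignment_loss(chunk_emb, text_emb, t=10.0, b=-10.0):
    """SigLIP-style pairwise sigmoid loss over chunk/text embedding pairs.
    Row i of each matrix is one embedding; diagonal pairs (chunk i, text i)
    are positives, and every off-diagonal pair is treated as a negative."""
    c = chunk_emb / np.linalg.norm(chunk_emb, axis=-1, keepdims=True)
    x = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = t * c @ x.T + b              # (n, n) scaled cosine similarities
    labels = 2.0 * np.eye(len(c)) - 1.0   # +1 on the diagonal, -1 elsewhere
    # -log sigmoid(labels * logits), computed stably via logaddexp
    return np.mean(np.logaddexp(0.0, -labels * logits))
```

Unlike a softmax contrastive loss, each chunk-text pair contributes an independent binary term, which fits the setting where a variable number of organ-wise chunks is sampled per volume.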