🤖 AI Summary
In chest CT scans, pathological lesions are spatially sparse, and the fine-grained semantic associations between report sentences and image subregions are implicit and non-bijective, which hinders effective multimodal representation learning. To address this, we propose SimCroP, a framework that integrates similarity-driven alignment with cross-granularity fusion. SimCroP constructs joint vision-language representations via multimodal masked modeling, enables adaptive cross-granularity matching through sentence–region similarity scoring, and fuses local lesion structures with global anatomical context to enhance multi-scale pathological modeling. Pretrained on a large-scale paired CT–report dataset, SimCroP consistently outperforms state-of-the-art self-supervised and vision-language pretraining methods across five public benchmarks, on both image classification and segmentation tasks, demonstrating improved downstream clinical understanding and localization.
📝 Abstract
Medical vision-language pre-training shows great potential for learning representative features from massive paired radiographs and reports. However, in computed tomography (CT) scans, lesions with intricate structures are distributed with spatial sparsity. Moreover, the complex and implicit relationships between the pathological descriptions in each report sentence and their corresponding sub-regions in radiographs pose additional challenges. In this paper, we propose a Similarity-Driven Cross-Granularity Pre-training (SimCroP) framework for chest CT, which combines similarity-driven alignment and cross-granularity fusion to improve radiograph interpretation. We first leverage multi-modal masked modeling to optimize the encoder for understanding precise low-level semantics in radiographs. Then, similarity-driven alignment pre-trains the encoder to adaptively select and align the patches corresponding to each sentence in the report. The cross-granularity fusion module integrates multimodal information at both the instance level and the word–patch level, helping the model capture key pathological structures in sparse radiographs and improving performance on multi-scale downstream tasks. SimCroP is pre-trained on a large-scale paired CT–report dataset and validated on image classification and segmentation tasks across five public datasets. Experimental results demonstrate that SimCroP outperforms both cutting-edge medical self-supervised learning methods and medical vision-language pre-training methods. Code and models are available at https://github.com/ToniChopp/SimCroP.
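To make the similarity-driven alignment idea concrete, the sketch below illustrates one plausible reading of it: score every image patch against each report sentence by cosine similarity, keep the top-k patches per sentence, and pool them with softmax weights. This is an illustrative sketch only, not the authors' implementation; the function name, `top_k` value, and softmax pooling are assumptions for demonstration.

```python
# Illustrative sketch (NOT the SimCroP code) of similarity-driven
# sentence-to-patch alignment, assuming precomputed embeddings.
import numpy as np

def align_sentences_to_patches(sent_emb, patch_emb, top_k=4):
    """For each sentence embedding, score all patch embeddings by
    cosine similarity, select the top-k patches, and return their
    indices plus a similarity-weighted pooled feature per sentence.

    sent_emb:  (S, D) array of sentence features
    patch_emb: (P, D) array of image-patch features
    """
    # L2-normalize so dot products are cosine similarities
    s = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    p = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    sim = s @ p.T                                    # (S, P) similarity matrix
    top_idx = np.argsort(-sim, axis=1)[:, :top_k]    # top-k patches per sentence
    sel = np.take_along_axis(sim, top_idx, axis=1)   # (S, top_k) selected scores
    # Softmax over the selected similarities -> attention-style weights
    w = np.exp(sel) / np.exp(sel).sum(axis=1, keepdims=True)
    pooled = np.einsum("sk,skd->sd", w, p[top_idx])  # (S, D) aligned features
    return top_idx, pooled

# Toy example: 2 sentences, 6 patches, 8-dim features
rng = np.random.default_rng(0)
idx, pooled = align_sentences_to_patches(rng.normal(size=(2, 8)),
                                         rng.normal(size=(6, 8)))
print(idx.shape, pooled.shape)  # (2, 4) (2, 8)
```

In the paper's setting, the pooled per-sentence patch features would feed an alignment objective so the encoder learns to select the sub-regions each sentence describes; here the selection is shown with fixed embeddings for clarity.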