🤖 AI Summary
Medical vision-language pretraining suffers from cross-modal alignment bias due to a semantic density mismatch between low signal-to-noise-ratio medical images and high signal-to-noise-ratio clinical reports. To address this, we propose a high-semantic-density visual representation learning framework. First, disease-level contrastive learning is introduced to enhance fine-grained discriminative capability. Second, we construct an anatomy-level normality prior: a VQ-VAE reconstructs the latent-space distribution of normal anatomical appearances, and the resulting distributional shift on abnormal inputs is amplified to strengthen anomaly signals, thereby significantly improving lesion perception. Evaluated on multi-center chest and abdominal CT datasets, our method achieves state-of-the-art zero-shot diagnostic performance—attaining a mean AUC of 84.9% across 54 diseases spanning 15 organs—outperforming existing approaches by a substantial margin. Moreover, it demonstrates strong cross-domain generalizability.
📝 Abstract
Vision-language pre-training (VLP) has great potential for developing multifunctional and general medical diagnostic capabilities. However, aligning medical images with a low signal-to-noise ratio (SNR) to reports with a high SNR presents a semantic density gap, leading to visual alignment bias. In this paper, we propose boosting vision semantic density to improve alignment effectiveness. On one hand, we enhance visual semantics through disease-level vision contrastive learning, which strengthens the model's ability to differentiate between normal and abnormal samples for each anatomical structure. On the other hand, we introduce an anatomical normality modeling method that models the distribution of normal samples for each anatomy, using a VQ-VAE to reconstruct normal vision embeddings in the latent space. This process amplifies abnormal signals by exploiting the distribution shift of abnormal samples, enhancing the model's perception and discrimination of abnormal attributes. The enhanced visual representation effectively captures diagnosis-relevant semantics, facilitating more efficient and accurate alignment with the diagnostic report. We conduct extensive experiments on two chest CT datasets, CT-RATE and Rad-ChestCT, and an abdominal CT dataset, MedVL-CT69K, and comprehensively evaluate diagnostic performance across multiple tasks in chest and abdominal CT scenarios, achieving state-of-the-art zero-shot performance. Notably, our method achieves an average AUC of 84.9% across 54 diseases in 15 organs, significantly surpassing existing methods. Additionally, we demonstrate the superior transfer learning capabilities of our pre-trained model. Code is available at https://github.com/alibaba-damo-academy/ViSD-Boost.
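The anatomical normality modeling idea—reconstructing normal embeddings so that abnormal inputs stand out through their reconstruction residual—can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: it replaces the learned VQ-VAE with a fixed codebook of prototype embeddings drawn from simulated "normal" samples, and all names and distributions here are assumptions for demonstration.

```python
# Toy sketch (assumption, not ViSD-Boost's actual code) of normality modeling:
# a codebook built from NORMAL embeddings reconstructs in-distribution inputs
# well, so the reconstruction residual amplifies the abnormality signal.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of K prototypes taken from normal-anatomy embeddings,
# standing in for the VQ-VAE latent codebook described in the abstract.
K, D = 32, 16
normal_bank = rng.normal(0.0, 1.0, size=(500, D))
codebook = normal_bank[rng.choice(500, K, replace=False)]

def quantize(z: np.ndarray) -> np.ndarray:
    """Replace each embedding with its nearest codebook entry (the VQ step)."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    return codebook[dists.argmin(axis=1)]

def anomaly_score(z: np.ndarray) -> np.ndarray:
    """Reconstruction residual: small for normal anatomy that lies on the
    modeled manifold, large for distribution-shifted (abnormal) embeddings."""
    return np.linalg.norm(z - quantize(z), axis=1)

normal = rng.normal(0.0, 1.0, size=(100, D))    # in-distribution samples
abnormal = rng.normal(3.0, 1.0, size=(100, D))  # shifted "lesion" samples
print(anomaly_score(abnormal).mean() > anomaly_score(normal).mean())  # True
```

In the full method, the residual between an anatomy's visual embedding and its VQ-VAE reconstruction serves as the amplified abnormality cue that is fed into vision-report alignment; the toy above only demonstrates why reconstruction under a normality prior separates shifted samples.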