🤖 AI Summary
In radiological diagnosis, critical challenges such as missed diagnoses, inattentional blindness, and report inconsistency are especially pronounced in 3D CT, where they stem from inaccurate local lesion detection, insufficient global contextual understanding, and highly heterogeneous reporting language. To address these issues, we propose MedVista3D, a multi-scale semantic-enriched 3D vision-language pretraining framework featuring a novel local-global joint alignment mechanism. Our approach integrates a Radiology Semantic Matching Bank with large language model based report rewriting to enable precise lesion localization, volumetric reasoning, and generation of semantically consistent natural-language reports. The method combines 3D cross-modal pretraining, fully convolutional contextual learning, and multi-granularity image-text alignment, and achieves state-of-the-art performance on zero-shot disease classification, report retrieval, and medical visual question answering. It also transfers effectively to organ segmentation and prognosis prediction, improving diagnostic consistency and clinical accuracy.
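The multi-granularity image-text alignment described above can be illustrated with a minimal sketch: a symmetric contrastive (InfoNCE-style) loss applied at two scales, volume-report pairs (global) and region-sentence pairs (local). Everything below is a schematic assumption, not the paper's implementation: the embedding dimension, temperature, 0.5 local weight, and the random stand-in features are all illustrative.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: matched image-text pairs share a row index."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))

    def xent(l):
        # stable softmax cross-entropy with the diagonal as the positive pair
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average of image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
global_img = rng.normal(size=(4, 128))  # one embedding per CT volume (stand-in)
global_txt = rng.normal(size=(4, 128))  # one embedding per full report (stand-in)
local_img = rng.normal(size=(4, 128))   # e.g. pooled lesion-region features (stand-in)
local_txt = rng.normal(size=(4, 128))   # e.g. sentence-level finding embeddings (stand-in)

# joint objective: global (volume-report) plus weighted local (region-sentence) alignment
loss = info_nce(global_img, global_txt) + 0.5 * info_nce(local_img, local_txt)
```

Training both terms jointly is what lets a single encoder support fine-grained lesion localization and whole-volume reasoning at once.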
📝 Abstract
Radiologic diagnostic errors (under-reading errors, inattentional blindness, and communication failures) remain prevalent in clinical practice. These issues often stem from missed localized abnormalities, limited global context, and variability in report language, and they are amplified in 3D imaging, where clinicians must examine hundreds of slices per scan. Addressing them requires systems with precise localized detection, global volume-level reasoning, and semantically consistent natural-language reporting. However, existing 3D vision-language models cannot meet all three needs jointly: they lack the local-global understanding needed for spatial reasoning and struggle with the variability and noise of uncurated radiology reports. We present MedVista3D, a multi-scale semantic-enriched vision-language pretraining framework for 3D CT analysis. To enable joint disease detection and holistic interpretation, MedVista3D performs local and global image-text alignment for fine-grained representation learning within full-volume context. To address report variability, we apply language-model rewrites and introduce a Radiology Semantic Matching Bank for semantics-aware alignment. MedVista3D achieves state-of-the-art performance on zero-shot disease classification, report retrieval, and medical visual question answering, and transfers well to organ segmentation and prognosis prediction. Code and datasets will be released.
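The idea behind a semantic matching bank for semantics-aware alignment can be sketched as follows: instead of treating every report as a unique target, each report embedding is softly matched against a bank of canonical finding phrases, so paraphrased reports with the same meaning receive similar alignment targets. This is a hypothetical numpy illustration of the general mechanism; the bank construction, temperature, and function name are assumptions, not the paper's actual design.

```python
import numpy as np

def semantic_soft_targets(report_emb, bank_emb, temperature=0.1):
    """Map each report embedding to a soft distribution over a bank of
    canonical finding-phrase embeddings via cosine similarity, so that
    differently worded but semantically equivalent reports land on
    similar target distributions."""
    r = report_emb / np.linalg.norm(report_emb, axis=1, keepdims=True)
    b = bank_emb / np.linalg.norm(bank_emb, axis=1, keepdims=True)
    sims = r @ b.T / temperature
    sims -= sims.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(sims)
    return probs / probs.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
bank = rng.normal(size=(16, 64))      # 16 canonical finding phrases (stand-in)
reports = rng.normal(size=(3, 64))    # 3 report embeddings (stand-in)
targets = semantic_soft_targets(reports, bank)
```

Aligning images against these soft, bank-based targets rather than raw report text is one way to absorb the wording variability that language-model rewrites alone cannot fully remove.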