MedVista3D: Vision-Language Modeling for Reducing Diagnostic Errors in 3D CT Disease Detection, Understanding and Reporting

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In radiological diagnosis, critical challenges—including missed diagnoses, inattentional blindness, and report inconsistency—are especially pronounced in 3D CT due to inaccurate local lesion detection, insufficient global contextual understanding, and highly heterogeneous reporting language. To address these issues, we propose a multi-scale semantic-enhanced 3D vision-language pretraining framework featuring a novel local-global joint alignment mechanism. Our approach integrates a medical semantic matching library with large language model–based report rewriting to enable precise lesion localization, volumetric reasoning, and generation of semantically consistent natural language reports. The method comprises 3D cross-modal pretraining, fully convolutional contextual learning, and multi-granularity image-text alignment. It achieves state-of-the-art performance on zero-shot disease classification, report retrieval, and medical visual question answering. Moreover, it transfers effectively to organ segmentation and prognosis prediction, significantly improving diagnostic consistency and clinical accuracy.

📝 Abstract
Radiologic diagnostic errors (under-reading errors, inattentional blindness, and communication failures) remain prevalent in clinical practice. These issues often stem from missed localized abnormalities, limited global context, and variability in report language. These challenges are amplified in 3D imaging, where clinicians must examine hundreds of slices per scan. Addressing them requires systems with precise localized detection, global volume-level reasoning, and semantically consistent natural language reporting. However, existing 3D vision-language models are unable to meet all three needs jointly, lacking local-global understanding for spatial reasoning and struggling with the variability and noise of uncurated radiology reports. We present MedVista3D, a multi-scale semantic-enriched vision-language pretraining framework for 3D CT analysis. To enable joint disease detection and holistic interpretation, MedVista3D performs local and global image-text alignment for fine-grained representation learning within full-volume context. To address report variability, we apply language model rewrites and introduce a Radiology Semantic Matching Bank for semantics-aware alignment. MedVista3D achieves state-of-the-art performance on zero-shot disease classification, report retrieval, and medical visual question answering, while transferring well to organ segmentation and prognosis prediction. Code and datasets will be released.
Problem

Research questions and friction points this paper is trying to address.

Reducing diagnostic errors in 3D CT imaging analysis
Addressing missed abnormalities and limited global context
Improving semantic consistency in radiology report generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale vision-language pretraining for 3D CT analysis
Local and global image-text alignment for joint detection
Semantic matching bank with language rewrites for report consistency
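
The bullets above center on contrastive image-text alignment at two scales: volume-report (global) and patch-sentence (local). As a rough illustration only (not the paper's actual objective), the idea can be sketched as a symmetric InfoNCE loss applied at both scales and summed; all function names, the temperature, and the local-loss weight below are assumptions:

```python
import numpy as np

def l2_normalize(x):
    """Unit-normalize embeddings so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def log_softmax(z):
    """Numerically stable row-wise log-softmax."""
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image/text pairs lie on the diagonal."""
    logits = l2_normalize(img_emb) @ l2_normalize(txt_emb).T / temperature
    diag = np.arange(logits.shape[0])
    loss_i2t = -log_softmax(logits)[diag, diag].mean()    # image -> text
    loss_t2i = -log_softmax(logits.T)[diag, diag].mean()  # text -> image
    return 0.5 * (loss_i2t + loss_t2i)

def joint_alignment_loss(global_img, global_txt, local_img, local_txt,
                         w_local=0.5):
    """Weighted sum of volume-level (global) and patch/sentence-level (local)
    contrastive losses; the w_local weighting is an illustrative assumption."""
    return info_nce(global_img, global_txt) + w_local * info_nce(local_img, local_txt)

# Toy usage: perfectly paired embeddings score lower than unrelated ones.
rng = np.random.default_rng(0)
g_img = rng.normal(size=(8, 64))
loss_aligned = joint_alignment_loss(g_img, g_img, g_img, g_img)
g_txt = rng.normal(size=(8, 64))
loss_random = joint_alignment_loss(g_img, g_txt, g_img, g_txt)
```

In this sketch, driving matched pairs to the diagonal of the similarity matrix at both granularities is what lets a model learn fine-grained lesion-sentence correspondence without losing the full-volume context described in the abstract.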
👥 Authors
Yuheng Li
Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA
Yenho Chen
Department of Machine Learning, Georgia Institute of Technology, Atlanta, GA
Yuxiang Lai
Ph.D. Student in Computer Science, Emory University (Computer Vision, Medical Imaging)
Jike Zhong
University of Southern California (Computer Vision, Machine Learning)
Vanessa Wildman
Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA
Xiaofeng Yang
Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA