🤖 AI Summary
Current medical image–language pretraining models struggle to capture the variability and diagnostic uncertainty inherent in clinical data, which limits their generalizability. To address this, we propose Distributed Masked Vision–Language Pretraining (D-MedVLP), the first multimodal pretraining framework to explicitly incorporate uncertainty-aware learning. D-MedVLP uses a large language model to automatically generate structured chest X-ray reports, each comprising a disease definition, imaging appearance, observations, and a verdict, and builds intra-modal and inter-modal uncertainty distribution modeling objectives grounded in these reports. Optimizing these objectives jointly with masked image–text reconstruction lets the framework learn clinical semantics and image ambiguity synergistically. Evaluated on five downstream tasks, D-MedVLP achieves state-of-the-art performance, with significant improvements in robustness, interpretability, and clinical applicability.
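The summary does not pin down a concrete formulation, but the uncertainty distribution modeling it describes is commonly realized by mapping each image or report embedding to a diagonal Gaussian and aligning matched pairs at the distribution level. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; `ProbabilisticHead`, `wasserstein2_gaussian`, and `intermodal_uncertainty_loss` are illustrative names rather than the paper's API, and the 2-Wasserstein distance is one plausible choice of distribution distance, not necessarily the one used in D-MedVLP.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbabilisticHead(nn.Module):
    """Maps a deterministic encoder feature to a diagonal Gaussian
    (mean, log-variance), a common way to represent embedding uncertainty."""
    def __init__(self, dim: int, proj_dim: int = 256):
        super().__init__()
        self.mu = nn.Linear(dim, proj_dim)
        self.logvar = nn.Linear(dim, proj_dim)

    def forward(self, feat: torch.Tensor):
        return self.mu(feat), self.logvar(feat)

def wasserstein2_gaussian(mu1, logvar1, mu2, logvar2):
    """Squared 2-Wasserstein distance between diagonal Gaussians,
    used here as a distribution-level alignment cost."""
    std1 = torch.exp(0.5 * logvar1)
    std2 = torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((std1 - std2) ** 2).sum(-1)

def intermodal_uncertainty_loss(img_mu, img_logvar, txt_mu, txt_logvar, tau=0.07):
    """Contrastive alignment over distributions: matched image-report pairs
    (the diagonal) should be closer in Wasserstein distance than mismatched ones."""
    # Pairwise distances between every image and every report in the batch: (B, B).
    d = wasserstein2_gaussian(
        img_mu[:, None], img_logvar[:, None],
        txt_mu[None, :], txt_logvar[None, :],
    )
    logits = -d / tau
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric InfoNCE-style objective over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

An analogous intra-modal term could contrast two masked or augmented views of the same image (or report) with the same distance, giving the intra-modal uncertainty objective mentioned above.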
📝 Abstract
Medical image-language pre-training aims to align medical images with clinically relevant text to improve model performance on various downstream tasks. However, existing models often struggle with the variability and ambiguity inherent in medical data, limiting their ability to capture nuanced clinical information and uncertainty. This work introduces an uncertainty-aware medical image-text pre-training model that enhances generalization in medical image analysis. Building on previous methods and focusing on chest X-rays, our approach uses structured text reports generated by a large language model (LLM) to augment image data with clinically relevant context. Each report begins with a definition of the disease, followed by an "appearance" section that highlights critical regions of interest, and closes with "observations" and "verdicts" that ground model predictions in clinical semantics. By modeling both inter- and intra-modal uncertainty, our framework captures the inherent ambiguity in medical images and text, yielding improved representations. The model achieves state-of-the-art performance on multiple downstream tasks, demonstrating significant advances in medical image-text pre-training.
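To make the report structure concrete, the hypothetical sketch below shows how the four sections described above could be requested from an LLM. The prompt wording, the section phrasing, and the `generate` callable are all assumptions for illustration; the abstract does not specify the exact prompt or model used.

```python
# Hypothetical prompt template mirroring the definition / appearance /
# observations / verdict layout described in the abstract.
REPORT_PROMPT = """You are a radiologist. For the disease '{disease}', write a
structured chest X-ray report with exactly these sections:
Definition: a one-sentence clinical definition of the disease.
Appearance: the typical imaging findings and critical regions of interest.
Observations: what is seen in this specific study.
Verdict: the diagnostic conclusion.
"""

def build_structured_report(disease: str, generate) -> str:
    """Ask an LLM for a structured report. `generate` is a stand-in
    (prompt -> text callable) for whatever LLM API is actually used."""
    return generate(REPORT_PROMPT.format(disease=disease))
```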