MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine

📅 2024-08-06
🏛️ arXiv.org
📈 Citations: 16
Influential: 2
🤖 AI Summary
Medical multimodal datasets commonly suffer from a scarcity of image–text pairs and a lack of multigranular annotations, hindering progress in medical captioning, report generation, and vision-centric tasks. To address this, we introduce the first large-scale medical multimodal dataset, spanning 10 imaging modalities and over 25 million images with annotations at two levels: global (modality and organ) and local (ROI, lesion texture, and region-wise associations), covering more than 65 diseases. We propose an automated, human-annotation-free multigranular labeling pipeline that integrates domain-expert models for ROI localization, medical knowledge base retrieval, and retrieval-augmented generation with multimodal large language models (MLLMs) to produce image–ROI–description triplets. Pretraining and fine-tuning the LLaVA architecture on these triplets yields LLaVA-Tri, which achieves state-of-the-art performance on VQA-RAD, SLAKE, and PathVQA, advancing medical multimodal understanding, radiology report generation, classification, and segmentation.

📝 Abstract
This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal dataset for medicine, covering over 25 million images across 10 modalities with multigranular annotations for more than 65 diseases. These multigranular annotations encompass both global information, such as modality and organ detection, and local information like ROI analysis, lesion texture, and region-wise correlations. Unlike the existing multimodal datasets, which are limited by the availability of image-text pairs, we have developed the first automated pipeline that scales up multimodal data by generating multigranular visual and textual annotations in the form of image-ROI-description triplets without the need for any paired text descriptions. Specifically, data from over 30 different sources have been collected, preprocessed, and grounded using domain-specific expert models to identify ROIs related to abnormal regions. We then build a comprehensive knowledge base and prompt multimodal large language models to perform retrieval-augmented generation with the identified ROIs as guidance, resulting in multigranular textual descriptions. Compared to existing datasets, MedTrinity-25M provides the most enriched annotations, supporting a comprehensive range of multimodal tasks such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. We propose LLaVA-Tri by pretraining LLaVA on MedTrinity-25M, achieving state-of-the-art performance on VQA-RAD, SLAKE, and PathVQA, surpassing representative SOTA multimodal large language models. Furthermore, MedTrinity-25M can also be utilized to support large-scale pre-training of multimodal medical AI models, contributing to the development of future foundation models in the medical domain. We will make our dataset available.
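The triplet-generation pipeline the abstract describes (expert-model ROI grounding, knowledge base retrieval, then retrieval-augmented description generation) can be sketched as follows. This is a minimal illustrative sketch only: the function names, the toy knowledge base, and the string-based "generation" are hypothetical stand-ins, not the paper's actual components.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Triplet:
    """An image-ROI-description triplet, the dataset's annotation unit."""
    image_id: str
    roi: Tuple[int, int, int, int]   # (x, y, w, h) bounding box
    description: str

def expert_localize(image: dict) -> Tuple[int, int, int, int]:
    """Stand-in for a domain-expert model that grounds abnormal regions."""
    return image["abnormal_region"]

# Toy knowledge base keyed by (modality, organ); the real pipeline
# retrieves from a comprehensive medical knowledge base.
KNOWLEDGE_BASE = {
    ("MRI", "brain"): "T1-weighted brain MRI; gliomas appear as hyperintense masses.",
    ("CT", "lung"): "Chest CT; nodules appear as focal round opacities.",
}

def retrieve_context(image: dict) -> str:
    """Stand-in for knowledge base retrieval."""
    return KNOWLEDGE_BASE.get((image["modality"], image["organ"]), "")

def generate_description(image: dict, roi, context: str) -> str:
    """Stand-in for retrieval-augmented generation with an MLLM,
    prompted with the identified ROI as guidance."""
    x, y, w, h = roi
    return (f"{image['modality']} of the {image['organ']}: region at "
            f"({x},{y},{w},{h}) is consistent with an abnormality. {context}")

def build_triplet(image: dict) -> Triplet:
    """Chain the three stages into one image-ROI-description triplet."""
    roi = expert_localize(image)
    context = retrieve_context(image)
    return Triplet(image["id"], roi, generate_description(image, roi, context))

sample = {"id": "img_001", "modality": "MRI", "organ": "brain",
          "abnormal_region": (40, 52, 18, 20)}
print(build_triplet(sample).description)
```

The key design point the sketch mirrors is that no paired text description is required as input: the textual side of each triplet is synthesized from the ROI and retrieved knowledge, which is what lets the pipeline scale to 25M images.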
Problem

Research questions and friction points this paper is trying to address.

Creating a large-scale multimodal medical dataset with multigranular annotations
Automating generation of image-ROI-description triplets without paired text
Enhancing medical AI models for tasks like captioning and segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline for multimodal data scaling
Multigranular annotations via retrieval-augmented generation
Pretraining LLaVA on enriched medical dataset