ChatEXAONEPath: An Expert-level Multimodal Large Language Model for Histopathology Using Whole Slide Images

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current pathological multimodal large language models (MLLMs) are constrained by the sparse clinical information in publicly available patch-level datasets, which hinders diagnostic-level semantic understanding of cancer. To address this, the authors propose an expert-level multimodal LLM designed specifically for whole-slide images (WSIs). The method introduces a WSI-level multimodal alignment framework, a retrieval-based pipeline for generating WSI–report pairs, and an AI-driven protocol for evaluating pathological semantics. Technically, it combines patch-level WSI representation learning, cross-modal feature alignment, multimodal instruction tuning, and structured clinical-report modeling. Evaluated on 1,134 TCGA WSI–report pairs, the model achieves a clinical acceptance rate of 62.9%, demonstrating pan-cancer histomorphological comprehension. This work points toward scalable, clinically deployable pathology AI.
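The retrieval-based pairing step can be pictured as nearest-neighbor matching in a shared embedding space: patch-level embeddings are pooled into one WSI-level vector, and the most similar report embedding is retrieved by cosine similarity. This is a minimal sketch under that assumption; all function names (`aggregate_patches`, `retrieve_report`) are illustrative, not the paper's actual implementation.

```python
import numpy as np

def aggregate_patches(patch_embs: np.ndarray) -> np.ndarray:
    """Mean-pool patch-level embeddings into a single WSI-level vector.
    (Stand-in for the paper's WSI-level representation learning.)"""
    return patch_embs.mean(axis=0)

def retrieve_report(wsi_vec: np.ndarray, report_embs: np.ndarray) -> int:
    """Return the index of the report embedding with the highest
    cosine similarity to the WSI-level vector."""
    wsi = wsi_vec / np.linalg.norm(wsi_vec)
    reports = report_embs / np.linalg.norm(report_embs, axis=1, keepdims=True)
    return int(np.argmax(reports @ wsi))

# Toy example: one WSI with 16 patch embeddings, 3 candidate reports,
# all in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 4))
reports = rng.normal(size=(3, 4))
best = retrieve_report(aggregate_patches(patches), reports)
print(best)
```

In practice the embeddings would come from a pathology foundation model and a text encoder; the point here is only the retrieval structure that pairs each WSI with its closest report.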

📝 Abstract
Recent studies have made significant progress in developing large language models (LLMs) for the medical domain that can answer expert-level questions and show potential to assist clinicians in real-world clinical scenarios. Work in this area has also highlighted the importance of integrating additional modalities into existing LLMs to better capture complex, inherently multi-faceted clinical contexts. Although multimodal LLMs in histopathology have demonstrated the ability to answer questions about given images, they fall short of a thorough clinical understanding because public datasets provide only patch-level data with limited information. Developing WSI-level MLLMs is therefore important for the scalability and applicability of MLLMs in histopathology. In this study, we introduce an expert-level MLLM for histopathology using whole-slide images (WSIs), dubbed ChatEXAONEPath. We present a retrieval-based data generation pipeline built on 10,094 pairs of WSIs and histopathology reports from The Cancer Genome Atlas (TCGA). We also present an AI-based evaluation protocol that assesses comprehensive understanding of the medical context from the given multimodal information and compares generated answers against the original histopathology reports. ChatEXAONEPath diagnoses given histopathology images with an acceptance rate of 62.9% on 1,134 pairs of WSIs and reports, and it understands pan-cancer WSIs and clinical context across various cancer types. We argue that the proposed model has the potential to assist clinicians in cancer diagnosis by comprehensively understanding the complex morphology of WSIs through the integration of multiple modalities.
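The AI-based evaluation protocol boils down to a judge that accepts or rejects each generated report against its reference, with the acceptance rate (62.9% on 1,134 pairs in the paper) as the headline metric. Below is a minimal sketch of that structure; the `keyword_judge` stub is a hypothetical stand-in, since the paper uses an AI model, not keyword overlap, to judge answers.

```python
from typing import Callable, Sequence

def acceptance_rate(generated: Sequence[str],
                    references: Sequence[str],
                    judge: Callable[[str, str], bool]) -> float:
    """Fraction of generated reports the judge accepts against references."""
    accepted = sum(judge(g, r) for g, r in zip(generated, references))
    return accepted / len(generated)

def keyword_judge(gen: str, ref: str) -> bool:
    """Stub judge: accept when key diagnostic terms overlap.
    (Illustrative only; the paper's protocol uses an AI judge.)"""
    keys = {"carcinoma", "adenocarcinoma", "melanoma"}
    gen_hits = {w for w in gen.lower().split() if w in keys}
    ref_hits = {w for w in ref.lower().split() if w in keys}
    return bool(gen_hits & ref_hits)

gen = ["invasive ductal carcinoma", "benign tissue"]
ref = ["ductal carcinoma in situ", "lung adenocarcinoma"]
print(acceptance_rate(gen, ref, keyword_judge))  # 0.5
```

Swapping in an LLM-backed `judge` that scores semantic agreement with the reference report recovers the shape of the paper's evaluation loop.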
Problem

Research questions and friction points this paper is trying to address.

Developing WSI-level MLLMs to improve the scalability and applicability of MLLMs in histopathology
Understanding pan-cancer WSIs and clinical context from multiple cancer types
Assisting clinicians in cancer diagnosis via multimodal WSI integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-based data generation from TCGA
AI-based multimodal evaluation protocol
Pan-cancer WSI and clinical context understanding