PolyPath: Adapting a Large Multimodal Model for Multi-slide Pathology Report Generation

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of integrated pathological diagnosis across multiple high-magnification whole-slide images (WSIs). The authors propose a cross-slide pathology report generation method built on a long-context large multimodal model (Gemini 1.5 Flash). The approach combines high-throughput tiling (10× magnification, 768×768-pixel patches), pathology-knowledge-aligned prompt engineering, and structured output generation to enable end-to-end analysis of up to five WSIs per case, roughly 40,000 image patches. This moves beyond conventional single-image or local-region modeling by supporting multi-slide reasoning and diagnostic summarization. In a blinded expert evaluation, generated reports were clinically accurate and rated equivalent to or preferred over the original human-written reports in 68% of multi-slide cases (95% CI: [60%, 76%]). The per-case input is equivalent to up to 11 hours of video at 1 fps, illustrating the scale of context that long-context LMMs bring to computational pathology.
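The tiling step described above can be sketched in a few lines. The 768×768 patch size comes from the paper; the function name, the coordinate-only output, and the choice to drop partial edge tiles are assumptions for illustration, not the authors' implementation:

```python
PATCH = 768  # patch edge length in pixels, as reported in the paper

def tile_coords(width: int, height: int, patch: int = PATCH) -> list[tuple[int, int]]:
    """Top-left (x, y) coordinates of each full, non-overlapping
    patch x patch tile; partial tiles at the edges are dropped."""
    return [
        (x, y)
        for y in range(0, height - patch + 1, patch)
        for x in range(0, width - patch + 1, patch)
    ]

# Example: a 2304x1536-pixel region yields a 3x2 grid of 768x768 tiles.
coords = tile_coords(2304, 1536)
print(len(coords))  # → 6
```

At real WSI dimensions (tens of thousands of pixels per side at 10× magnification), this grid quickly reaches the thousands of patches per slide that the paper describes.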

📝 Abstract
The interpretation of histopathology cases underlies many important diagnostic and treatment decisions in medicine. Notably, this process typically requires pathologists to integrate and summarize findings across multiple slides per case. Existing vision-language capabilities in computational pathology have so far been largely limited to small regions of interest, larger regions at low magnification, or single whole-slide images (WSIs). This limits interpretation of findings that span multiple high-magnification regions across multiple WSIs. By making use of Gemini 1.5 Flash, a large multimodal model (LMM) with a 1-million token context window, we demonstrate the ability to generate bottom-line diagnoses from up to 40,000 768x768 pixel image patches from multiple WSIs at 10X magnification. This is the equivalent of up to 11 hours of video at 1 fps. Expert pathologist evaluations demonstrate that the generated report text is clinically accurate and equivalent to or preferred over the original reporting for 68% (95% CI: [60%, 76%]) of multi-slide examples with up to 5 slides. While performance decreased for examples with 6 or more slides, this study demonstrates the promise of leveraging the long-context capabilities of modern LMMs for the uniquely challenging task of medical report generation where each case can contain thousands of image patches.
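The video-equivalence figure in the abstract is simple arithmetic: treating each of the up to 40,000 patches as one video frame at 1 fps gives 40,000 seconds of footage. A quick check (constants taken from the abstract):

```python
PATCHES_PER_CASE = 40_000  # up to 40,000 image patches per case
FPS = 1                    # abstract's comparison point: video at 1 fps

seconds = PATCHES_PER_CASE / FPS   # one patch per frame, one frame per second
hours = seconds / 3600             # convert seconds to hours
print(f"{hours:.1f} hours")        # → 11.1 hours
```

This matches the "up to 11 hours of video" cited in both the summary and the abstract.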
Problem

Research questions and friction points this paper is trying to address.

Pathology diagnosis requires integrating findings across multiple high-magnification WSIs per case
Existing vision-language approaches are limited to small regions of interest, low-magnification views, or single WSIs
Each case can contain thousands of image patches, exceeding typical model context windows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Multimodal Model
Multi-slide Pathology Report
1-million token context