🤖 AI Summary
This work addresses a key limitation of existing multimodal large language models for computational pathology: their reliance on compressing a whole-slide image into a single embedding, which hinders fine-grained localization and cross-scale reasoning. To overcome this, the authors propose a hierarchical multimodal large language model built around a four-level visual–language alignment mechanism spanning cells, image patches, tissue regions, and the entire slide, thereby emulating the multiscale diagnostic workflow of pathologists. The approach integrates multiscale feature extraction, a Cell-Cell Attention Fusion (CCAF) transformer, hierarchical contrastive learning, a cross-scale consistency loss, and instruction tuning. Evaluated on 13 whole-slide image benchmarks spanning six tasks, the model sets new state-of-the-art results, improving both diagnostic accuracy and interpretability.
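The CCAF transformer named above is only described at a high level here. One plausible reading is attention-based pooling: a learned query token cross-attends over the embeddings of all cells segmented in a patch and emits a single cellular token. The PyTorch sketch below follows that assumption; the class name, dimensions, and single-layer design are illustrative guesses, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the paper's code) of a CCAF-style module:
# a learned query attends over per-patch cell embeddings to yield one token.
import torch
import torch.nn as nn

class CellAttentionFusion(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learned fusion query
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cell_embs: torch.Tensor) -> torch.Tensor:
        # cell_embs: (batch, n_cells, dim), embeddings of cells segmented in a patch
        q = self.query.expand(cell_embs.size(0), -1, -1)
        fused, _ = self.attn(q, cell_embs, cell_embs)  # query cross-attends to cells
        return self.norm(fused.squeeze(1))             # (batch, dim): one cellular token

# Usage: fuse 37 cell embeddings per patch, for a batch of 4 patches.
tokens = CellAttentionFusion()(torch.randn(4, 37, 512))  # -> (4, 512)
```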
📝 Abstract
Whole Slide Images (WSIs) exhibit hierarchical structure, where diagnostic information emerges from cellular morphology, regional tissue organization, and global context. Existing Computational Pathology (CPath) Multimodal Large Language Models (MLLMs) typically compress an entire WSI into a single embedding, which hinders fine-grained grounding and ignores how pathologists synthesize evidence across scales. We introduce \textbf{MLLM-HWSI}, a Hierarchical WSI-level MLLM that aligns visual features with pathology language at four distinct scales (cell as word, patch as phrase, region as sentence, and WSI as paragraph) to support interpretable, evidence-grounded reasoning. MLLM-HWSI decomposes each WSI into multi-scale embeddings with scale-specific projectors and jointly enforces (i) a hierarchical contrastive objective and (ii) a cross-scale consistency loss, preserving semantic coherence from cells to the whole slide. We identify diagnostically relevant patches and aggregate the segmented cell embeddings of each patch into a compact cellular token using a lightweight \textit{Cell-Cell Attention Fusion (CCAF)} transformer. The projected multi-scale tokens are fused with text tokens and fed to an instruction-tuned LLM for open-ended reasoning, VQA, report generation, and caption generation. Trained in three stages, MLLM-HWSI achieves new state-of-the-art (SOTA) results on 13 WSI-level benchmarks across six CPath tasks. By aligning language with multi-scale visual evidence, MLLM-HWSI produces accurate, interpretable outputs that mirror diagnostic workflows and advance holistic WSI understanding. Code is available at: \href{https://github.com/BasitAlawode/HWSI-MLLM}{GitHub}.
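As a rough illustration of how objectives (i) and (ii) could be combined, the sketch below pairs an InfoNCE-style contrastive term at each scale (visual embedding vs. paired text embedding) with a cosine-similarity consistency term between adjacent scales (cell → patch → region → WSI). The temperature, the weight `lam`, and the function names are assumptions for illustration; the paper's exact formulation may differ.

```python
# A minimal sketch, assuming InfoNCE per scale plus adjacent-scale consistency.
import torch
import torch.nn.functional as F

def info_nce(img: torch.Tensor, txt: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # img, txt: (batch, dim) embeddings; matched pairs sit on the diagonal
    logits = F.normalize(img, dim=-1) @ F.normalize(txt, dim=-1).T / tau
    labels = torch.arange(img.size(0), device=img.device)
    return F.cross_entropy(logits, labels)

def hierarchical_loss(scales: list, texts: list, lam: float = 0.5) -> torch.Tensor:
    # scales/texts: per-scale (batch, dim) embeddings ordered cell, patch, region, WSI
    contrastive = sum(info_nce(v, t) for v, t in zip(scales, texts))
    # pull each pair of adjacent scales toward agreement
    consistency = sum(1 - F.cosine_similarity(a, b, dim=-1).mean()
                      for a, b in zip(scales[:-1], scales[1:]))
    return contrastive + lam * consistency
```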