🤖 AI Summary
Prior work predominantly applies vision-language models (VLMs) to patch-level analysis of histopathological images, leaving their potential for whole-slide image (WSI) modeling largely unexplored. Method: This paper introduces VLEER, a VLM-based framework for WSI representation learning: an interpretable, end-to-end paradigm that combines cross-modal embedding alignment, tile-wise aggregation, text-guided attention visualization, and pathology semantic decoding to map WSIs directly to human-readable diagnostic annotations. Contribution/Results: By moving beyond patch-level modeling, the approach outperforms conventional vision feature extractors on three benchmark WSI datasets in both classification and survival prediction, while generating clinically verifiable, natural-language diagnostic rationales grounded in histopathological semantics.
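To make the pipeline concrete, here is a minimal sketch of how such a text-grounded WSI representation could be computed, assuming a CLIP-style pathology VLM whose image encoder, text encoder, and tokenizer are supplied as callables. The prompt list and the attention-style pooling are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

# Hypothetical pathology concepts; a VLEER-style method would use a much
# richer vocabulary of detailed pathology annotations.
PROMPTS = [
    "tumor cells with high nuclear atypia",
    "dense lymphocytic infiltration",
    "necrotic tissue region",
    "normal glandular epithelium",
]

@torch.no_grad()
def wsi_text_features(tile_images, image_encoder, text_encoder, tokenizer):
    """Map a bag of WSI tiles to a text-aligned, slide-level feature vector.

    tile_images: (N, 3, H, W) tensor of tiles cropped from one slide.
    Returns a (len(PROMPTS),) vector: one interpretable score per concept.
    """
    # 1) Cross-modal embedding: project tiles and prompts into the shared
    #    VLM space and L2-normalize so dot products are cosine similarities.
    img = F.normalize(image_encoder(tile_images), dim=-1)        # (N, D)
    txt = F.normalize(text_encoder(tokenizer(PROMPTS)), dim=-1)  # (P, D)

    # 2) Alignment: similarity of every tile to every textual concept.
    sim = img @ txt.T                                            # (N, P)

    # 3) Tile-wise aggregation: attention-style pooling so the tiles most
    #    relevant to a concept dominate its slide-level score.
    attn = torch.softmax(sim, dim=0)                             # (N, P)
    return (attn * sim).sum(dim=0)                               # (P,)
```

A downstream classifier or survival model can then operate on this vector; because each dimension is tied to a named concept, its predictions remain traceable to readable pathology terms.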
📝 Abstract
Recent advances in vision-language models (VLMs) have shown remarkable potential for bridging visual and textual modalities. In computational pathology, domain-specific VLMs pre-trained on large histopathology image-text datasets have succeeded in a range of downstream tasks. However, existing research has focused primarily on pre-training and on direct patch-level applications of VLMs, leaving their potential for whole slide image (WSI) analysis unexplored. In this study, we hypothesize that pre-trained VLMs inherently capture informative and interpretable WSI representations through quantitative feature extraction. To validate this hypothesis, we introduce Vision and Language Embeddings for Explainable WSI Representation (VLEER), a novel method that leverages VLMs for WSI representation. We systematically evaluate VLEER on three pathological WSI datasets and show that it outperforms conventional vision features in WSI analysis. More importantly, VLEER offers the advantage of interpretability: by leveraging the textual modality for detailed pathology annotations, it provides direct, human-readable insight into the results and clear reasoning for WSI-level downstream tasks.
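Building on the sketch above, the interpretability claim can be illustrated with a hypothetical readout: since each feature dimension is anchored to a text prompt, decoding a slide-level vector into human-readable annotations reduces to ranking prompts by score. The prompts and scores below are invented for demonstration.

```python
import torch

PROMPTS = [
    "tumor cells with high nuclear atypia",
    "dense lymphocytic infiltration",
    "necrotic tissue region",
    "normal glandular epithelium",
]

def readable_annotations(slide_feature: torch.Tensor, top_k: int = 3):
    """Return the top-k concepts for a slide as (text, score) pairs."""
    scores, idx = slide_feature.topk(min(top_k, len(PROMPTS)))
    return [(PROMPTS[i], float(s)) for i, s in zip(idx.tolist(), scores.tolist())]

# Made-up slide-level scores, one per prompt above.
slide_feature = torch.tensor([0.41, 0.17, 0.08, 0.02])
for text, score in readable_annotations(slide_feature):
    print(f"{score:.2f}  {text}")
```

The output would read, e.g., `0.41  tumor cells with high nuclear atypia`, giving a direct textual rationale for the slide-level prediction.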