🤖 AI Summary
Current single-resolution vision-language models (VLMs) in computational pathology struggle with fine-grained tasks such as cancer subtyping, tissue phenotyping, and survival prediction. To address this, we propose a multi-resolution pathology-language joint pretraining framework. Our approach introduces a cross-resolution vision-text alignment mechanism, leveraging multi-scale whole-slide image (WSI) patch sampling, pathology-specific text generation, and joint optimization via contrastive learning and cross-modal reconstruction. Pretrained on 34 million image-text pairs from The Cancer Genome Atlas (TCGA), the framework employs novel loss functions that enhance the discriminability and generalizability of multi-scale features. After fine-tuning, our model achieves state-of-the-art performance across multiple pathology benchmarks, demonstrating substantial improvements in multi-task accuracy and robustness to resolution variations.
📝 Abstract
In Computational Pathology (CPath), the introduction of Vision-Language Models (VLMs) has opened new avenues for research, focusing primarily on aligning image-text pairs at a single magnification level. However, this approach might not be sufficient for tasks like cancer subtype classification, tissue phenotyping, and survival analysis due to the limited level of detail that a single-resolution image can provide. To address this, we propose a novel multi-resolution paradigm that leverages Whole Slide Images (WSIs) to extract histology patches at multiple resolutions and generates corresponding textual descriptions through an advanced CPath VLM. We introduce visual-textual alignment at multiple resolutions, as well as cross-resolution alignment, to establish more effective text-guided visual representations. Cross-resolution alignment via a multimodal encoder enhances the model's ability to capture context across multiple resolutions in histology images. Supported by novel loss functions, our model captures a broader range of information, enriching feature representations, improving discriminative ability, and enhancing generalization across resolutions. Pre-trained on a comprehensive TCGA dataset with 34 million image-language pairs at various resolutions, our fine-tuned model outperforms state-of-the-art (SOTA) counterparts across multiple datasets and tasks, demonstrating its effectiveness in CPath. The code is available on GitHub at: https://github.com/BasitAlawode/MR-PLIP
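The abstract describes contrastive vision-text alignment at each resolution plus a cross-resolution alignment term. The snippet below is a minimal numpy sketch of that idea using a standard symmetric InfoNCE loss, not the authors' actual implementation: the variable names, the two-resolution setup, and the simple sum of loss terms are illustrative assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    a, b: (batch, dim) arrays; row i of `a` and row i of `b` form a positive pair,
    all other rows in the batch serve as negatives.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalise
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # (batch, batch) similarities
    labels = np.arange(len(a))

    def xent(l):
        # cross-entropy with the diagonal (matching pair) as the target class
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average of image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
batch, dim = 4, 32
img_lo = rng.normal(size=(batch, dim))  # low-resolution patch embeddings (toy data)
img_hi = rng.normal(size=(batch, dim))  # high-resolution patch embeddings
txt_lo = rng.normal(size=(batch, dim))  # text embeddings for each resolution
txt_hi = rng.normal(size=(batch, dim))

# per-resolution vision-text alignment + a cross-resolution vision-vision term
loss = (info_nce(img_lo, txt_lo)
        + info_nce(img_hi, txt_hi)
        + info_nce(img_lo, img_hi))
```

In a real pipeline the embeddings would come from the image and text encoders rather than a random generator, and the relative weighting of the three terms would be a tuned hyperparameter.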