🤖 AI Summary
This work addresses the few-shot, weakly supervised classification of whole-slide images (WSIs) in rare-disease scenarios, where training data is severely limited. Existing methods suffer from three key limitations: they neglect the textual priors of vision-language models (VLMs), model multi-scale context insufficiently, and lack effective instance aggregation mechanisms. To overcome these, the authors propose a hierarchical prompt-tuning framework guided by multi-scale pathological vision-language priors: (1) a frozen large language model generates multi-scale pathological prior knowledge that guides hierarchical prompt tuning of frozen VLMs; (2) a graph-structured contextual prompting module explicitly models local–global relationships within the WSI; and (3) a non-parametric cross-guided instance aggregation mechanism derives discriminative WSI-level features. The method achieves consistent improvements over state-of-the-art approaches across five WSI benchmarks and three downstream tasks, offers strong interpretability, and adapts well to data-efficient rare-disease diagnosis. The code is publicly available.
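To make the graph-structured contextual prompting concrete, here is a minimal, hedged sketch of one way such context modeling could work: patches are connected by spatial proximity on the slide grid and their features are smoothed by neighborhood averaging. All function and parameter names (`propagate_context`, `radius`, `steps`) are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def propagate_context(feats, coords, radius=1.5, steps=2):
    """Illustrative graph-context sketch (not the paper's module):
    connect patches whose grid coordinates lie within `radius`,
    row-normalize the adjacency, and propagate features by
    repeated neighborhood averaging."""
    # Pairwise distances between patch grid coordinates.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (d <= radius).astype(float)       # includes self-loops (d = 0)
    adj /= adj.sum(axis=1, keepdims=True)   # row-normalize to averaging weights
    h = feats
    for _ in range(steps):
        h = adj @ h                         # one round of message passing
    return h

# Three patches in a row; only adjacent patches are connected.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
out = propagate_context(feats, coords, radius=1.5, steps=1)
```

With one step, each patch's feature becomes the mean of itself and its grid neighbors, so contextual information flows between adjacent patches while the feature dimensionality is preserved.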
📝 Abstract
Multiple instance learning (MIL) has become a standard paradigm for the weakly supervised classification of whole slide images (WSIs). However, this paradigm relies on a large number of labeled WSIs for training, and the scarcity of training data for rare diseases poses a significant challenge for these methods. Prompt tuning combined with pre-trained vision-language models (VLMs) is an effective solution to the Few-shot Weakly Supervised WSI Classification (FSWC) task. Nevertheless, applying prompt tuning methods designed for natural images to WSIs presents three significant challenges: 1) these methods fail to fully leverage the prior knowledge from the VLM's text modality; 2) they overlook the essential multi-scale and contextual information in WSIs, leading to suboptimal results; and 3) they lack exploration of instance aggregation methods. To address these problems, we propose a Multi-Scale and Context-focused Prompt Tuning (MSCPT) method for the FSWC task. Specifically, MSCPT employs a frozen large language model to generate pathological visual language prior knowledge at multiple scales, guiding hierarchical prompt tuning. Additionally, we design a graph prompt tuning module to learn essential contextual information within WSIs, and finally, a non-parametric cross-guided instance aggregation module is introduced to derive the WSI-level features. Extensive experiments, visualizations, and interpretability analyses were conducted on five datasets and three downstream tasks using three VLMs, demonstrating the strong performance of MSCPT. All code has been made publicly accessible at https://github.com/Hanminghao/MSCPT.
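A non-parametric, text-guided instance aggregation could be sketched as follows: each patch (instance) is scored by its cosine similarity to class text embeddings, and a temperature-scaled softmax over those scores pools the instances into a single WSI-level feature without any learned aggregation weights. The function name, the max-over-classes scoring, and the temperature `tau` are assumptions for illustration; this is not the paper's exact module.

```python
import numpy as np

def cross_guided_pool(inst_feats, text_embs, tau=0.07):
    """Hedged sketch of non-parametric, text-guided aggregation:
    score each instance by its best cosine similarity to any class
    text embedding, then softmax-pool instances into one WSI feature.
    No learnable parameters are involved."""
    # L2-normalize so dot products are cosine similarities.
    f = inst_feats / np.linalg.norm(inst_feats, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = f @ t.T                       # (num_instances, num_classes)
    scores = sims.max(axis=1)            # relevance to the closest class
    w = np.exp(scores / tau)             # temperature-scaled softmax weights
    w /= w.sum()
    return w @ inst_feats                # weighted WSI-level feature

# Two instances, one class prompt aligned with the first instance.
inst_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
text_embs = np.array([[1.0, 0.0]])
wsi_feat = cross_guided_pool(inst_feats, text_embs)
```

With a small temperature, the pooled feature is dominated by the instances most similar to the class text embeddings, which is the intended cross-guidance effect: the text modality steers which patches drive the slide-level representation.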