AI Summary
To address the high computational cost and the difficulty of modeling task- and slide-specific heterogeneity in whole-slide image (WSI) analysis with multiple instance learning (MIL), this paper proposes PTCMIL, a Vision Transformer (ViT) framework built on learnable prompt tokens. PTCMIL unifies clustering and prediction within the prompt token space, enabling lightweight and interpretable feature aggregation via projection-based clustering and prototype-based pooling. It further introduces a dynamic token merging mechanism to adaptively integrate patch-level diversity. Evaluated on eight public WSI datasets, PTCMIL achieves state-of-the-art performance on both classification and survival analysis tasks, outperforming existing MIL methods. Ablation studies confirm the effectiveness and robustness of its prompt-guided clustering and task-coordinated design.
Abstract
Multiple Instance Learning (MIL) has advanced whole-slide image (WSI) analysis but struggles with the complexity and heterogeneity of WSIs. Existing MIL methods face challenges in aggregating diverse patch information into robust WSI representations. While Vision Transformers (ViTs) and clustering-based approaches show promise, they are computationally intensive and fail to capture task-specific and slide-specific variability. To address these limitations, we propose PTCMIL, a novel Prompt Token Clustering-based ViT for MIL aggregation. By introducing learnable prompt tokens into the ViT backbone, PTCMIL unifies clustering and prediction tasks in an end-to-end manner. It dynamically aligns clustering with downstream tasks, using projection-based clustering tailored to each WSI, reducing complexity while preserving patch heterogeneity. Through token merging and prototype-based pooling, PTCMIL efficiently captures task-relevant patterns. Extensive experiments on eight datasets demonstrate its superior performance in classification and survival analysis tasks, outperforming state-of-the-art methods. Systematic ablation studies confirm its robustness and strong interpretability. The code is released at https://github.com/ubc-tea/PTCMIL.
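To make the core idea concrete, here is a minimal NumPy sketch of projection-based clustering with prototype pooling: each patch embedding is assigned to its most similar prompt token, and each cluster is pooled into a prototype. This is an illustrative toy, not the paper's implementation; the function name `prompt_token_clustering` and all tensor shapes are assumptions, the prompt tokens are fixed here rather than learned, and the real model operates inside a ViT with token merging.

```python
import numpy as np

def prompt_token_clustering(patches, prompts):
    """Toy projection-based clustering (illustrative, not the paper's API).

    patches: (n_patches, d) patch embeddings of one WSI.
    prompts: (n_prompts, d) prompt tokens acting as cluster anchors.
    Returns (n_prompts, d) pooled prototypes and the patch-to-cluster
    assignment vector.
    """
    # Project patches onto prompt tokens: (n_patches, n_prompts) similarities.
    sims = patches @ prompts.T
    # Hard-assign each patch to its most similar prompt token.
    assign = sims.argmax(axis=1)
    protos = []
    for k in range(prompts.shape[0]):
        members = patches[assign == k]
        # Mean-pool each cluster into a prototype; if a cluster is empty,
        # fall back to the prompt token itself.
        protos.append(members.mean(axis=0) if len(members) else prompts[k])
    return np.stack(protos), assign

rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 16))  # toy patch embeddings for one slide
prompts = rng.normal(size=(4, 16))    # 4 prompt tokens (fixed in this sketch)
protos, assign = prompt_token_clustering(patches, prompts)
print(protos.shape)  # (4, 16)
```

In the full method the prototypes would be fed to the downstream head, so gradients from the prediction loss shape the prompt tokens and thus the clustering itself; this sketch only shows the forward aggregation step.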