🤖 AI Summary
To address the weak spatial context and rapid decay of long-range dependencies in gigapixel whole-slide image (WSI) sequence modeling, this paper proposes a novel Mamba-based architecture tailored for WSI analysis. The method introduces three key innovations: (1) an overlapping scanning strategy to enhance local continuity; (2) a selective stripe position encoder (S2PE) for efficient, spatially aware token embedding; and (3) a context token selection (CTS) mechanism that dynamically focuses on discriminative regions. Compatible with mainstream feature extractors—including ResNet-50, PLIP, and CONCH—the framework supports instance-level spatial reordering and supervision-guided memory enhancement. Evaluated across 20 pathology benchmarks spanning diagnostic classification, molecular prediction, and survival analysis, it achieves state-of-the-art performance while demonstrating strong robustness across feature extractors.
📝 Abstract
Whole-slide images (WSIs) are an important data modality in computational pathology, yet their gigapixel resolution and lack of fine-grained annotations challenge conventional deep learning models. Multiple instance learning (MIL) offers a solution by treating each WSI as a bag of patch-level instances, but effectively modeling ultra-long sequences with rich spatial context remains difficult. Recently, Mamba has emerged as a promising alternative for long-sequence learning, scaling linearly to thousands of tokens. However, despite its efficiency, it still suffers from limited spatial context modeling and memory decay, constraining its effectiveness in WSI analysis. To address these limitations, we propose MambaMIL+, a new MIL framework that explicitly integrates spatial context while maintaining long-range dependency modeling without memory decay. Specifically, MambaMIL+ introduces 1) overlapping scanning, which restructures the patch sequence to embed spatial continuity and instance correlations; 2) a selective stripe position encoder (S2PE) that encodes positional information while mitigating the biases of fixed scanning orders; and 3) a contextual token selection (CTS) mechanism, which leverages supervisory knowledge to dynamically enlarge the contextual memory for stable long-range modeling. Extensive experiments on 20 benchmarks across diagnostic classification, molecular prediction, and survival analysis demonstrate that MambaMIL+ consistently achieves state-of-the-art performance under three feature extractors (ResNet-50, PLIP, and CONCH), highlighting its effectiveness and robustness for large-scale computational pathology.
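The abstract does not give implementation details for overlapping scanning, but the idea of restructuring a flat patch sequence so that adjacent segments share boundary instances can be illustrated with a minimal sketch. The function name and the `window`/`overlap` parameters below are hypothetical, not taken from the paper:

```python
import numpy as np

def overlapping_scan(features: np.ndarray, window: int = 4, overlap: int = 1) -> np.ndarray:
    """Sketch of an overlapping scanning strategy (assumed interpretation).

    Restructures a flat (N, D) sequence of patch embeddings into
    consecutive windows of length `window` that share `overlap`
    instances with their neighbors, so boundary patches appear in
    two windows and local continuity is preserved across segments.
    """
    step = window - overlap
    segments = [features[i:i + window] for i in range(0, len(features) - overlap, step)]
    return np.concatenate(segments, axis=0)

# 8 patches with 1-D features, for illustration
feats = np.arange(8).reshape(8, 1)
out = overlapping_scan(feats, window=4, overlap=1)
# Windows [0..3], [3..6], [6..7]: patches 3 and 6 are duplicated at
# window boundaries, giving the sequence 0,1,2,3,3,4,5,6,6,7.
```

The duplicated boundary instances are what lets a sequential model such as Mamba carry local spatial context across segment edges instead of treating each window independently.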