🤖 AI Summary
To address the challenge of sparse positive instances in gigapixel histopathological images—which causes conventional multiple instance learning (MIL) models to overlook hard examples and yield inaccurate decision boundaries—this paper proposes Masked Hard Instance Mining (MHIM). MHIM employs a momentum teacher–student framework that dynamically masks salient instances via class-aware instance probabilities and large-scale random masking, implicitly steering training toward hard-to-classify regions. It further incorporates a global recycle network and a consistency constraint to ensure stable and diverse hard instance discovery. Technically, MHIM integrates a Siamese architecture, exponential moving average parameter updates, and end-to-end optimization. Evaluated across 12 benchmarks—spanning cancer diagnosis, subtyping, and survival analysis—MHIM consistently outperforms state-of-the-art methods in both discriminative performance and inference efficiency. The source code is publicly available.
📝 Abstract
Digitizing pathological images into gigapixel Whole Slide Images (WSIs) has opened new avenues for Computational Pathology (CPath). As positive tissue comprises only a small fraction of a gigapixel WSI, existing Multiple Instance Learning (MIL) methods typically focus on identifying salient instances via attention mechanisms. However, this leads to a bias towards easy-to-classify instances while neglecting challenging ones. Recent studies have shown that hard examples are crucial for accurately modeling discriminative boundaries. Applying this idea at the instance level, we propose a novel MIL framework with masked hard instance mining (MHIM-MIL), which utilizes a Siamese structure with a consistency constraint to explore hard instances. Using a class-aware instance probability, MHIM-MIL employs a momentum teacher to mask salient instances and implicitly mine hard instances for training the student model. To obtain diverse, non-redundant hard instances, we adopt large-scale random masking while utilizing a global recycle network to mitigate the risk of losing key features. Furthermore, the student updates the teacher using an exponential moving average, which identifies new hard instances for subsequent training iterations and stabilizes optimization. Experimental results on 12 benchmarks spanning cancer diagnosis, subtyping, and survival analysis tasks demonstrate that MHIM-MIL outperforms the latest methods in both performance and efficiency. The code is available at: https://github.com/DearCaat/MHIM-MIL.
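The two core mechanisms described above—masking the teacher's most salient instances (plus a large random subset) so the student trains on harder ones, and updating the teacher as an exponential moving average of the student—can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, masking ratios, and the simple dict-based EMA are assumptions for clarity.

```python
import numpy as np

def hard_instance_mask(attn_scores, salient_ratio=0.1, random_ratio=0.5, rng=None):
    """Build a boolean keep-mask over a bag's instances.

    Drops the most salient instances (highest teacher-attention scores) and a
    large random fraction of the rest, leaving harder instances for the
    student. Ratios here are illustrative, not the paper's hyperparameters.
    """
    rng = rng or np.random.default_rng(0)
    n = len(attn_scores)
    keep = np.ones(n, dtype=bool)
    # Mask the top-`salient_ratio` instances by teacher attention.
    n_salient = max(1, int(n * salient_ratio))
    keep[np.argsort(attn_scores)[-n_salient:]] = False
    # Large-scale random masking among the remainder, for diversity.
    remaining = np.flatnonzero(keep)
    n_random = int(len(remaining) * random_ratio)
    keep[rng.choice(remaining, size=n_random, replace=False)] = False
    return keep

def ema_update(teacher_params, student_params, momentum=0.999):
    """Momentum teacher update: teacher <- m * teacher + (1 - m) * student."""
    return {name: momentum * teacher_params[name]
                  + (1.0 - momentum) * student_params[name]
            for name in teacher_params}
```

In a training loop, the teacher would score each bag's instances, `hard_instance_mask` would select which instance features the student sees, and `ema_update` would refresh the teacher's weights after each student step so it keeps surfacing new hard instances.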