🤖 AI Summary
In computational pathology, weak instance-level representations and insufficient contextual modeling in whole-slide image (WSI) multiple-instance learning (MIL) arise from reliance solely on bag-level supervision. To address this, we propose a Coarse-to-Fine Self-Distillation (CFSD) paradigm that, for the first time, automatically transforms coarse bag-level supervision into high-quality instance-level supervision. We further introduce Two-Dimensional Positional Encoding (2DPE) to explicitly model spatial relationships among instances. We also theoretically establish the instance-level learnability of CFSD. Evaluated on three benchmarks (TCGA-NSCLC, CAMELYON16, and breast cancer receptor status prediction) our method achieves state-of-the-art performance: AUCs of 0.9152 and 0.8524 for estrogen and progesterone receptor prediction, 0.9618 for histological subtype classification, and 0.8634 for tumour detection.
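The summary above does not detail how CFSD converts bag-level supervision into instance-level supervision; a minimal sketch of the general coarse-to-fine idea, assuming a linear instance scorer trained with mean-pooled bag supervision, then probed for confident instance pseudo-labels that are distilled back into the same classifier (all data, thresholds, and training choices here are illustrative, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MIL data: 8 bags of 16 instances with 10-d features.
# A bag is positive iff it contains at least one positive instance.
X = rng.normal(size=(8, 16, 10))
true_inst = (X[..., 0] > 1.0).astype(float)        # hidden instance labels
y_bag = (true_inst.max(axis=1) > 0).astype(float)  # coarse bag labels

w = np.zeros(10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Coarse stage: train on bag labels via mean-pooled instance scores.
for _ in range(200):
    bag_prob = sigmoid((X @ w).mean(axis=1))                      # [bags]
    grad = ((bag_prob - y_bag)[:, None] * X.mean(axis=1)).mean(axis=0)
    w -= 0.5 * grad

# Probe: score every instance with the bag-trained classifier.
inst_prob = sigmoid(X @ w)                                        # [bags, instances]

# Distil: keep only confident instances as pseudo-labels (threshold illustrative).
conf = (inst_prob > 0.8) | (inst_prob < 0.2)
pseudo = (inst_prob > 0.5).astype(float)

# Fine stage: refine the SAME classifier with instance-level supervision,
# masking the gradient to the confident pseudo-labelled instances.
for _ in range(200):
    p = sigmoid(X @ w)
    g = ((p - pseudo) * conf)[..., None] * X
    w -= 0.5 * g.sum(axis=(0, 1)) / max(conf.sum(), 1)

print(inst_prob.shape)  # (8, 16)
```

The key point the sketch illustrates is that no extra annotation is needed: the instance-level targets come from probing the classifier that bag-level training already produced.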
📝 Abstract
Multiple Instance Learning (MIL) for whole slide image (WSI) analysis in computational pathology often neglects instance-level learning, as supervision is typically provided only at the bag level. In this work, we present PathMIL, a framework designed to improve MIL from two perspectives: (1) employing instance-level supervision and (2) learning inter-instance contextual information at the bag level. First, we propose a novel Coarse-to-Fine Self-Distillation (CFSD) paradigm that probes and distils a classifier trained with bag-level supervision to obtain instance-level labels, which in turn provide finer-grained supervision for the same classifier. Second, to capture inter-instance contextual information in WSIs, we propose Two-Dimensional Positional Encoding (2DPE), which encodes the spatial arrangement of instances within a bag. We also prove, both theoretically and empirically, the instance-level learnability of CFSD. PathMIL is evaluated on multiple benchmarks, including subtype classification (TCGA-NSCLC), tumour classification (CAMELYON16), and an internal benchmark for breast cancer receptor status classification. Our method achieves state-of-the-art performance, with AUC scores of 0.9152 and 0.8524 for estrogen and progesterone receptor status classification, respectively, 0.9618 for subtype classification, and 0.8634 for tumour classification, surpassing existing methods.
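The abstract does not specify how 2DPE is constructed; a minimal sketch of one plausible realization, assuming standard sinusoidal encodings applied independently to each patch's row and column grid coordinates and concatenated into a single spatial embedding per instance (the function names and the 10000 base are illustrative conventions, not the paper's stated design):

```python
import numpy as np

def sinusoidal_encoding(pos, dim):
    """Standard 1D sinusoidal encoding of integer positions -> [n, dim]."""
    pos = np.asarray(pos, dtype=np.float64)[:, None]            # [n, 1]
    i = np.arange(dim // 2, dtype=np.float64)[None, :]          # [1, dim/2]
    angles = pos / (10000.0 ** (2 * i / dim))                   # [n, dim/2]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

def two_d_positional_encoding(coords, dim):
    """Hypothetical 2DPE: encode (row, col) patch coordinates separately
    and concatenate, giving each instance a dim-sized spatial embedding."""
    rows, cols = coords[:, 0], coords[:, 1]
    return np.concatenate(
        [sinusoidal_encoding(rows, dim // 2),
         sinusoidal_encoding(cols, dim // 2)],
        axis=1,
    )

# Toy bag: 4 WSI patches on a 2x2 grid, 64-dim spatial embedding each.
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
pe = two_d_positional_encoding(coords, 64)
print(pe.shape)  # (4, 64)
```

In use, such an embedding would typically be added to (or concatenated with) each patch's feature vector before bag-level aggregation, so the aggregator can exploit where patches sit on the slide rather than treating the bag as an unordered set.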