HookMIL: Revisiting Context Modeling in Multiple Instance Learning for Computational Pathology

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly supervised analysis of whole-slide images (WSIs) in computational pathology faces two key challenges: conventional multi-instance learning (MIL) methods neglect local–global contextual relationships, while Transformer-based approaches suffer from quadratic computational complexity and redundant token interactions. Method: We propose HookMIL, a novel MIL framework featuring learnable hook tokens initialized from multimodal (visual, textual, spatial) priors; a hook diversity loss that encourages structural specialization; lightweight inter-token communication for context refinement; and linear-complexity bidirectional attention for efficient global aggregation. Contribution/Results: HookMIL achieves state-of-the-art performance on four public pathology benchmark datasets. It significantly accelerates inference relative to Transformer baselines while generating diagnostically informative heatmaps with enhanced biological interpretability, demonstrating structured, specialized, and low-redundancy contextual modeling without compromising accuracy.

📝 Abstract
Multiple Instance Learning (MIL) has enabled weakly supervised analysis of whole-slide images (WSIs) in computational pathology. However, traditional MIL approaches often lose crucial contextual information, while transformer-based variants, though more expressive, suffer from quadratic complexity and redundant computations. To address these limitations, we propose HookMIL, a context-aware and computationally efficient MIL framework that leverages compact, learnable hook tokens for structured contextual aggregation. These tokens can be initialized from (i) key-patch visual features, (ii) text embeddings from vision-language pathology models, and (iii) spatially grounded features from spatial transcriptomics-vision models. This multimodal initialization lets hook tokens incorporate rich textual and spatial priors, accelerating convergence and enhancing representation quality. During training, hook tokens interact with instances through bidirectional attention with linear complexity. To further promote specialization, we introduce a Hook Diversity Loss that encourages each token to focus on distinct histopathological patterns. Additionally, a hook-to-hook communication mechanism refines contextual interactions while minimizing redundancy. Extensive experiments on four public pathology datasets demonstrate that HookMIL achieves state-of-the-art performance with improved computational efficiency and interpretability. Code is available at https://github.com/lingxitong/HookMIL.
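The linear-complexity bidirectional attention described above can be illustrated with a minimal sketch: a small set of K hook tokens first gathers context from all N instance features, then redistributes that context back to the instances, so cost scales as O(N·K·d) rather than the O(N²·d) of full self-attention. This is only an illustration under our own assumptions; function names, shapes, and the single-round update are ours, not the paper's.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_hook_attention(instances, hooks):
    """One round of hook <-> instance cross-attention (hypothetical sketch).
    instances: (N, d) patch features; hooks: (K, d) hook tokens, K << N.
    Both attention maps are N x K, so cost is O(N*K*d), linear in N."""
    d = instances.shape[1]
    # Hooks aggregate global context from all instances.
    attn_h = softmax(hooks @ instances.T / np.sqrt(d), axis=-1)      # (K, N)
    hooks_ctx = attn_h @ instances                                   # (K, d)
    # Instances are refined by the contextualized hooks.
    attn_i = softmax(instances @ hooks_ctx.T / np.sqrt(d), axis=-1)  # (N, K)
    instances_ctx = attn_i @ hooks_ctx                               # (N, d)
    return instances_ctx, hooks_ctx
```

In practice the instance count N for a WSI can reach tens of thousands, so routing all interaction through a handful of hook tokens is what makes the aggregation tractable.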
Problem

Research questions and friction points this paper is trying to address.

Addresses loss of contextual information in Multiple Instance Learning for pathology.
Reduces quadratic complexity and redundancy in transformer-based MIL approaches.
Enhances computational efficiency and interpretability in whole-slide image analysis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hook tokens for structured contextual aggregation
Multimodal initialization with textual and spatial priors
Linear complexity attention with diversity loss mechanism
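The diversity-loss idea in the last bullet can be sketched as a penalty on pairwise cosine similarity between hook tokens, pushing each token toward a distinct direction. This is a common formulation offered as an assumption; the paper's exact loss may differ.

```python
import numpy as np

def hook_diversity_loss(hooks):
    """Hypothetical diversity penalty: mean squared off-diagonal cosine
    similarity between hook tokens. hooks: (K, d). Zero when all tokens
    are mutually orthogonal; 1 when all tokens are identical."""
    h = hooks / np.linalg.norm(hooks, axis=1, keepdims=True)
    sim = h @ h.T                      # (K, K) cosine-similarity matrix
    K = sim.shape[0]
    off_diag = sim - np.eye(K)         # drop self-similarity on the diagonal
    return np.square(off_diag).sum() / (K * (K - 1))
```

Minimizing this term alongside the classification loss discourages redundant tokens that all latch onto the same histopathological pattern.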
Xitong Ling
Tsinghua University
AI4Pathology · Foundation-Model · Vision-Language-Model
Minxi Ouyang
Tsinghua University
CV · Pathology
Xiaoxiao Li
Shenzhen International Graduate School, Tsinghua University
Jiawen Li
Shenzhen International Graduate School, Tsinghua University
Ying Chen
School of Informatics, Xiamen University
Yuxuan Sun
School of Engineering, Westlake University
Xinrui Chen
Tsinghua University
Efficient Deep Learning · Computer Vision
Tian Guan
Shenzhen International Graduate School, Tsinghua University
Xiaoping Liu
Zhongnan Hospital, Wuhan University
Yonghong He
Shenzhen International Graduate School, Tsinghua University
Biomedical Engineering · Optical Imaging · AI Image Processing · Pathology Foundation Models