Beyond Linearity: Squeeze-and-Recalibrate Blocks for Few-Shot Whole Slide Image Classification

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address expert annotation scarcity, overfitting in few-shot learning, and discriminative feature distortion in computational pathology, this paper proposes a lightweight, plug-and-play Squeeze-and-Recalibrate (SR) block that replaces the linear layers of conventional multiple-instance learning (MIL) models. The SR block introduces a dual-path mechanism: (i) a trainable low-rank squeeze pathway that approximates the original linear mapping while limiting capacity, and (ii) a frozen random recalibration matrix that preserves the geometric structure of features. Together, the two paths are theoretically guaranteed to approximate any linear mapping to arbitrary precision. Unlike prior approaches, SR requires no architectural modifications, additional preprocessing, or reliance on vision-language models, substantially reducing computational overhead. Evaluated across multiple whole-slide image (WSI) benchmarks, SR-enhanced models consistently outperform state-of-the-art few-shot MIL methods while cutting parameter count by over 60%; crucially, the original model's performance serves as a theoretical lower bound for its SR-enhanced counterpart.
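The "over 60% fewer parameters" claim follows from standard low-rank factorization arithmetic: a dense d_in x d_out linear layer is replaced by a rank-r factor pair, while the frozen recalibration matrix contributes no trainable parameters. A minimal sketch of that count, with illustrative dimensions that are assumptions and not taken from the paper:

```python
# Illustrative trainable-parameter count when a dense linear layer is replaced
# by a rank-r factor pair (the frozen recalibration matrix adds no trainable
# parameters). Dimensions below are hypothetical, not from the paper.
d_in, d_out, r = 1024, 512, 32

linear_params = d_in * d_out            # dense layer: 524288
sr_trainable = r * (d_in + d_out)       # low-rank pair: 49152
reduction = 1 - sr_trainable / linear_params

print(f"{reduction:.1%} fewer trainable parameters")  # → 90.6% fewer trainable parameters
```

With these (hypothetical) dimensions the reduction is far above 60%; the actual figure depends on the rank and layer sizes chosen in each MIL model.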

📝 Abstract
Deep learning has advanced computational pathology but expert annotations remain scarce. Few-shot learning mitigates annotation burdens yet suffers from overfitting and discriminative feature mischaracterization. In addition, the current few-shot multiple instance learning (MIL) approaches leverage pretrained vision-language models to alleviate these issues, but at the cost of complex preprocessing and high computational cost. We propose a Squeeze-and-Recalibrate (SR) block, a drop-in replacement for linear layers in MIL models to address these challenges. The SR block comprises two core components: a pair of low-rank trainable matrices (squeeze pathway, SP) that reduces parameter count and imposes a bottleneck to prevent spurious feature learning, and a frozen random recalibration matrix that preserves geometric structure, diversifies feature directions, and redefines the optimization objective for the SP. We provide theoretical guarantees that the SR block can approximate any linear mapping to arbitrary precision, thereby ensuring that the performance of a standard MIL model serves as a lower bound for its SR-enhanced counterpart. Extensive experiments demonstrate that our SR-MIL models consistently outperform prior methods while requiring significantly fewer parameters and no architectural changes.
Problem

Research questions and friction points this paper is trying to address.

Addresses few-shot learning overfitting in pathology images
Reduces computational cost in multiple instance learning
Enhances feature learning with Squeeze-and-Recalibrate blocks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Squeeze-and-Recalibrate block replaces linear layers
Low-rank matrices reduce parameters and overfitting
Frozen recalibration matrix preserves geometric structure
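The dual-path design described above can be sketched in a few lines of numpy. This is a hypothetical illustration of the mechanism as summarized here (trainable low-rank pair plus a frozen random matrix, summed into one effective weight), not the authors' implementation; all names and initialization scales are assumptions.

```python
import numpy as np

class SRBlock:
    """Hypothetical sketch of a Squeeze-and-Recalibrate block: a drop-in
    replacement for a linear layer, per the dual-path design in the summary."""

    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        # Squeeze pathway (SP): trainable low-rank factor pair.
        self.A = rng.standard_normal((rank, d_in)) * 0.01   # down-projection
        self.B = rng.standard_normal((d_out, rank)) * 0.01  # up-projection
        # Recalibration path: frozen random matrix, never updated in training.
        self.R = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

    def forward(self, x):
        # Effective weight W = B @ A + R; only A and B would receive gradients.
        return x @ (self.B @ self.A + self.R).T

# A bag of 4 instance features, as in MIL over WSI patches (sizes illustrative).
blk = SRBlock(d_in=512, d_out=256, rank=16)
x = np.ones((4, 512))
y = blk.forward(x)
print(y.shape)  # → (4, 256)
```

The low-rank pair alone could not span arbitrary feature directions; adding the frozen full-rank random matrix is what, per the paper's claim, lets the combined mapping approximate any linear layer while keeping the trainable parameter count small.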
Authors
Conghao Xiong (CUHK)
Zhengrui Guo (HKUST)
Zhe Xu (CUHK)
Yifei Zhang (NTU)
Raymond Kai-Yu Tong (CUHK)
Si Yong Yeo (Nanyang Technological University)
Hao Chen (HKUST)
Joseph J. Y. Sung (Lee Kong Chian School of Medicine, NTU)
Irwin King (The Chinese University of Hong Kong)