MUSE: Harnessing Precise and Diverse Semantics for Few-Shot Whole Slide Image Classification

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of few-shot classification in computational pathology, where expert annotations for whole-slide images (WSIs) are scarce. The authors propose a visual–language learning framework enhanced with sample-level semantic augmentation. By introducing Sample-wise Fine-grained Semantic Enhancement (SFSE) and Stochastic Multi-view Model Optimization (SMMO), the method dynamically generates diverse and precise image–text descriptions, overcoming the limitations of conventional class-level static priors. An adaptive visual–semantic interaction is achieved through a Mixture-of-Experts (MoE) architecture, while a large language model constructs a category-level pathological knowledge base. Dynamic supervision is provided by stochastically retrieving and fusing multi-view textual descriptions. Evaluated on three WSI benchmark datasets, the approach significantly outperforms existing visual–language methods, demonstrating the effectiveness and generalizability of sample-aware semantic optimization for few-shot pathological classification.
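The MoE-based adaptive visual–semantic interaction described above could be sketched roughly as follows. This is a minimal illustration under assumed shapes and names (`moe_semantic_prior`, the gating scheme, and all dimensions are hypothetical, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_semantic_prior(slide_feat, class_text_feats, experts, gate_w):
    """Hypothetical MoE interaction: each expert projects the slide
    feature, a softmax gate mixes the expert outputs, and the mixed
    feature shifts the static class-level text priors toward a
    sample-wise fine-grained prior."""
    gates = softmax(slide_feat @ gate_w)                       # (n_experts,)
    expert_outs = np.stack([slide_feat @ W for W in experts])  # (n_experts, d)
    mixed = gates @ expert_outs                                # (d,)
    return class_text_feats + mixed                            # (n_classes, d)

d, n_experts, n_classes = 16, 4, 3
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
slide = rng.normal(size=d)                 # one slide-level feature
text = rng.normal(size=(n_classes, d))     # static class-level text priors
prior = moe_semantic_prior(slide, text, experts, gate_w)
print(prior.shape)  # (3, 16)
```

The point of the sketch is only the data flow: a per-sample gate decides how the experts combine, so two different slides of the same class end up with different refined priors.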

📝 Abstract
In computational pathology, few-shot whole slide image classification is made difficult primarily by the extreme scarcity of expert-labeled slides. Recent vision-language methods incorporate textual semantics generated by large language models, but treat these descriptions as static class-level priors that are shared across all samples and lack sample-wise refinement. This limits both the diversity and precision of visual-semantic alignment, hindering generalization under limited supervision. To overcome this, we propose stochastic MUlti-view Semantic Enhancement (MUSE), a framework that first refines semantic precision via sample-wise adaptation and then enhances semantic richness through retrieval-augmented multi-view generation. Specifically, MUSE introduces Sample-wise Fine-grained Semantic Enhancement (SFSE), which yields a fine-grained semantic prior for each sample through MoE-based adaptive visual-semantic interaction. Guided by this prior, Stochastic Multi-view Model Optimization (SMMO) constructs an LLM-generated knowledge base of diverse pathological descriptions per class, then retrieves and stochastically integrates multiple matched textual views during training. These dynamically selected texts serve as enriched semantic supervision to stochastically optimize the vision-language model, promoting robustness and mitigating overfitting. Experiments on three benchmark WSI datasets show that MUSE consistently outperforms existing vision-language baselines in few-shot settings, demonstrating that effective few-shot pathology learning requires not only richer semantic sources but also their active and sample-aware semantic optimization. Our code is available at: https://github.com/JiahaoXu-god/CVPR2026_MUSE.
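The retrieve-and-stochastically-integrate step that SMMO performs over the per-class knowledge base could look roughly like the following. Everything here (the cosine ranking, the Dirichlet mixture, the function and variable names) is an illustrative assumption, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

def retrieve_and_fuse(prior, knowledge_base, k=3):
    """Hypothetical SMMO step: rank a class's LLM-generated textual
    views by cosine similarity to the sample-wise prior, keep the
    top-k matches, and fuse them with a random convex combination so
    each training step sees a different supervision embedding."""
    sims = knowledge_base @ prior
    sims /= np.linalg.norm(knowledge_base, axis=1) * np.linalg.norm(prior)
    top = np.argsort(sims)[-k:]             # indices of the k best views
    weights = rng.dirichlet(np.ones(k))     # stochastic mixture weights
    return weights @ knowledge_base[top]    # fused text supervision target

d, n_views = 16, 10
kb = rng.normal(size=(n_views, d))   # textual views for one class
prior = rng.normal(size=d)           # sample-wise semantic prior from SFSE
target = retrieve_and_fuse(prior, kb)
print(target.shape)  # (16,)
```

Resampling the mixture weights each step is one simple way to realize "stochastic" supervision: the target stays within the span of the matched views but never collapses to a single static description.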
Problem

Research questions and friction points this paper is trying to address.

few-shot learning
whole slide image classification
vision-language alignment
semantic precision
computational pathology
Innovation

Methods, ideas, or system contributions that make the work stand out.

few-shot learning
vision-language model
semantic enhancement
multi-view retrieval
computational pathology