TC-SSA: Token Compression via Semantic Slot Aggregation for Gigapixel Pathology Reasoning

📅 2026-03-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of processing gigapixel whole-slide images (WSIs), whose extreme sequence lengths exceed the capacity of standard Transformers, while existing spatial sampling strategies often discard diagnostically critical information. To overcome this, the authors propose a learnable semantic slot aggregation mechanism that compresses massive numbers of image patches into a fixed set of semantic slots under a strict token budget, ensuring global coverage and efficient inference. By employing sparse Top-2 routing with gated assignment and weighted feature fusion, the method effectively preserves diagnostically relevant information, surpassing the limitations of conventional sampling approaches. Furthermore, it integrates a vision–language foundation model to enhance representation learning. The approach achieves an overall accuracy of 78.34% on SlideBench (TCGA) and attains MIL classification AUCs of 95.83%, 98.27%, and 79.80% on the TCGA-BRCA, TCGA-NSCLC, and PANDA datasets, respectively.

๐Ÿ“ Abstract
The application of large vision-language models to computational pathology holds great promise for diagnostic assistants but faces a critical computational bottleneck: the gigapixel scale of Whole Slide Images (WSIs). A single WSI typically contains over 10^5 patches, creating sequence lengths that exceed the constraints of standard Transformer architectures. Existing solutions often resort to spatial sampling, which risks discarding diagnostically critical evidence. To address this, we propose TC-SSA (Token Compression via Semantic Slot Aggregation), a learnable token compression framework that aggregates patch features into a fixed number of semantic slots. A gated routing module assigns patches to slots using sparse Top-2 routing, followed by weighted aggregation, enabling global slide coverage under a strict token budget. The resulting representation retains diagnostically relevant information while reducing the number of visual tokens to 1.7% of the original sequence. On SlideBench (TCGA), our model achieves 78.34% overall accuracy and 77.14% on the diagnosis subset, outperforming sampling-based baselines under comparable token budgets. The method also generalizes to MIL classification, reaching AUCs of 95.83% on TCGA-BRCA, 98.27% on TCGA-NSCLC, and 79.80% on PANDA. These results suggest that learnable semantic aggregation provides an effective trade-off between efficiency and diagnostic performance for gigapixel pathology reasoning.
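The sparse Top-2 routing with gated assignment and weighted aggregation described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name, the dot-product affinity between patches and learnable slot queries, and all shapes are hypothetical.

```python
import numpy as np

def top2_slot_aggregate(patches, slot_queries):
    """Compress N patch features into K semantic slots.

    Each patch is routed to its 2 highest-affinity slots (sparse Top-2),
    gated by a softmax over those two scores, and each slot is the
    gate-weighted average of the patches assigned to it. Hypothetical
    sketch: routing here is a plain dot product against slot queries.
    """
    logits = patches @ slot_queries.T                 # (N, K) affinities
    top2 = np.argsort(logits, axis=1)[:, -2:]         # 2 best slots per patch
    rows = np.arange(len(patches))[:, None]
    g = logits[rows, top2]                            # (N, 2) gate logits
    g = np.exp(g - g.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)                 # softmax over the 2 slots

    k, d = slot_queries.shape
    slots = np.zeros((k, d))
    weight = np.zeros(k)
    for j in range(2):                                # scatter gated features
        np.add.at(slots, top2[:, j], g[:, j, None] * patches)
        np.add.at(weight, top2[:, j], g[:, j])
    return slots / np.maximum(weight, 1e-6)[:, None]  # weighted fusion

# Example: 1,000 patch embeddings compressed to 16 slot tokens.
rng = np.random.default_rng(0)
slots = top2_slot_aggregate(rng.normal(size=(1000, 32)),
                            rng.normal(size=(16, 32)))
```

In a trained model the slot queries (and any routing projection) would be learned end-to-end; the key property the sketch preserves is that every patch contributes to exactly two slots, so the slide is covered globally while the token count stays fixed at K.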
Problem

Research questions and friction points this paper is trying to address.

gigapixel pathology
Whole Slide Images
token compression
computational bottleneck
diagnostic information retention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token Compression
Semantic Slot Aggregation
Gigapixel Pathology
Vision-Language Models
Sparse Routing
Zhuo Chen
Zhejiang University
Knowledge Graph · Multi-Modal Learning · Large Language Model · AI4Science

Shawn Young
Shenzhen University of Advanced Technology, Shenzhen, Guangdong, China

Lijian Xu
Shenzhen University of Advanced Technology, Shenzhen, Guangdong, China