GMAT: Grounded Multi-Agent Clinical Description Generation for Text Encoder in Vision-Language MIL for Whole Slide Image Classification

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models for whole-slide image (WSI) classification suffer from limited text-encoding capability: they rely either on handcrafted single prompts or on clinical descriptions generated by large language models (LLMs), both of which lack medical specificity, exhibit coarse semantic granularity, and align weakly with visual features. Method: We propose a multi-agent collaborative framework for generating fine-grained, clinically grounded textual descriptions. It integrates pathology textbook knowledge with domain-expert agents to autonomously produce structured, verifiable description lists that replace fixed-length prompts, and combines multiple instance learning with list-wise text encoding to strengthen vision-language semantic alignment. Contribution/Results: Evaluated on renal and lung cancer WSI datasets, the method surpasses single-prompt baselines and achieves performance comparable to state-of-the-art models, demonstrating superior medical accuracy and cross-dataset generalizability.

📝 Abstract
Multiple Instance Learning (MIL) is the leading approach for whole slide image (WSI) classification, enabling efficient analysis of gigapixel pathology slides. Recent work has introduced vision-language models (VLMs) into MIL pipelines to incorporate medical knowledge through text-based class descriptions rather than simple class names. However, when these methods rely on large language models (LLMs) to generate clinical descriptions or use fixed-length prompts to represent complex pathology concepts, the limited token capacity of VLMs often constrains the expressiveness and richness of the encoded class information. Additionally, descriptions generated solely by LLMs may lack domain grounding and fine-grained medical specificity, leading to suboptimal alignment with visual features. To address these challenges, we propose a vision-language MIL framework with two key contributions: (1) A grounded multi-agent description generation system that leverages curated pathology textbooks and agent specialization (e.g., morphology, spatial context) to produce accurate and diverse clinical descriptions; (2) A text encoding strategy using a list of descriptions rather than a single prompt, capturing fine-grained and complementary clinical signals for better alignment with visual features. Integrated into a VLM-MIL pipeline, our approach shows improved performance over single-prompt class baselines and achieves results comparable to state-of-the-art models, as demonstrated on renal and lung cancer datasets.
Problem

Research questions and friction points this paper is trying to address.

Limited token capacity in VLMs restricts class information richness
LLM-generated descriptions lack medical specificity and visual alignment
Fixed-length prompts fail to represent complex pathology concepts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grounded multi-agent system for clinical descriptions
List-based text encoding for fine-grained signals
Integration of curated pathology textbooks
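The list-based text-encoding idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the random vectors below are hypothetical stand-ins for the patch embeddings and per-class description-list embeddings that a CLIP-style vision-language encoder would produce, and the top-k pooling is just one common MIL aggregation choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Normalize vectors so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical encoder outputs (assumed shapes, not from the paper):
# 50 patches, 2 classes, 4 descriptions per class, 128-dim embeddings.
n_patches, n_classes, n_desc, dim = 50, 2, 4, 128
patch_feats = l2_normalize(rng.standard_normal((n_patches, dim)))
desc_feats = l2_normalize(rng.standard_normal((n_classes, n_desc, dim)))

# List-wise text encoding: each class is represented by a list of
# description embeddings; a patch's class score averages its cosine
# similarity over that list instead of using one prompt embedding.
sim = np.einsum("pd,cld->pcl", patch_feats, desc_feats)  # (patch, class, list)
patch_class_scores = sim.mean(axis=-1)                   # (patch, class)

# MIL aggregation: mean of the top-k patch scores per class
# (one simple pooling choice; attention pooling is another).
k = 5
slide_logits = np.sort(patch_class_scores, axis=0)[-k:].mean(axis=0)

# Softmax over classes gives slide-level probabilities.
probs = np.exp(slide_logits) / np.exp(slide_logits).sum()
```

Averaging over a description list lets complementary clinical signals (morphology, spatial context, etc.) contribute to one class score, which is the alignment benefit the bullet points describe.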