Detecting Performance Degradation under Data Shift in Pathology Vision-Language Model

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of pathology vision-language models (VLMs) under distribution shifts in clinical deployment, a challenge exacerbated by the absence of effective label-free degradation detection mechanisms. To this end, the authors propose a unified monitoring framework that jointly leverages input-level data shift detection and output-level prediction-confidence analysis to identify performance deterioration without requiring ground-truth labels. Input-level shift is analyzed through DomainSAT, a lightweight visualization toolbox that integrates representative unsupervised shift detection algorithms, and is complemented by a label-free, confidence-based indicator that tracks changes in the model's output behavior. Experiments on a large-scale histopathology tumor classification dataset demonstrate that the combined framework detects VLM performance degradation under distribution shift reliably and interpretably, enhancing the robustness and trustworthiness of clinical deployment.
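The paper does not spell out its shift detection algorithms here, but a minimal sketch of input-level data shift detection on model features is a per-dimension two-sample Kolmogorov–Smirnov test with a Bonferroni correction; `detect_feature_shift` and the batch names below are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_shift(ref_feats, new_feats, alpha=0.05):
    """Per-dimension two-sample KS test between a reference feature batch
    (e.g. development data) and an incoming batch, with Bonferroni
    correction across dimensions.

    Returns (shift_detected, fraction_of_shifted_dimensions).
    """
    d = ref_feats.shape[1]
    pvals = np.array([
        ks_2samp(ref_feats[:, j], new_feats[:, j]).pvalue for j in range(d)
    ])
    flagged = pvals < alpha / d  # Bonferroni-corrected per-test threshold
    return bool(flagged.any()), float(flagged.mean())

# Synthetic stand-ins for image embeddings (hypothetical data).
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 16))
shifted_batch = rng.normal(0.8, 1.0, size=(500, 16))  # mean shift in every dim

print(detect_feature_shift(ref, ref))            # identical data: no shift
print(detect_feature_shift(ref, shifted_batch))  # strong shift: flagged
```

As the paper notes, a positive shift flag is an early diagnostic signal, not proof of degradation: the model may still predict correctly on shifted inputs, which is why the output-level indicator below is needed as a complement.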

📝 Abstract
Vision-Language Models (VLMs) have demonstrated strong potential in medical image analysis and disease diagnosis. However, after deployment, their performance may deteriorate when the input data distribution shifts from that observed during development. Detecting such performance degradation is essential for clinical reliability, yet remains challenging for large pre-trained VLMs operating without labeled data. In this study, we investigate performance degradation detection under data shift in a state-of-the-art pathology VLM. We examine both input-level data shift and output-level prediction behavior to understand their respective roles in monitoring model reliability. To facilitate systematic analysis of input data shift, we develop DomainSAT, a lightweight toolbox with a graphical interface that integrates representative shift detection algorithms and enables intuitive exploration of data shift. Our analysis shows that while input data shift detection is effective at identifying distributional changes and providing early diagnostic signals, it does not always correspond to actual performance degradation. Motivated by this observation, we further study output-based monitoring and introduce a label-free, confidence-based degradation indicator that directly captures changes in model prediction confidence. We find that this indicator exhibits a close relationship with performance degradation and serves as an effective complement to input shift detection. Experiments on a large-scale pathology dataset for tumor classification demonstrate that combining input data shift detection and output confidence-based indicators enables more reliable detection and interpretation of performance degradation in VLMs under data shift. These findings provide a practical and complementary framework for monitoring the reliability of foundation models in digital pathology.
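The abstract's exact confidence-based indicator is not specified in this card; a common label-free proxy in this spirit is the drop in mean maximum-softmax confidence between a trusted reference batch and incoming data. The sketch below, with the hypothetical names `confidence_drop` and `degradation_flag`, illustrates that idea rather than the paper's method.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def confidence_drop(ref_logits, new_logits):
    """Drop in mean max-softmax confidence from a reference batch
    (e.g. validation data seen during development) to an incoming batch."""
    ref_conf = softmax(ref_logits).max(axis=1).mean()
    new_conf = softmax(new_logits).max(axis=1).mean()
    return float(ref_conf - new_conf)

def degradation_flag(ref_logits, new_logits, tol=0.05):
    """Flag likely degradation when mean confidence falls by more than tol."""
    return confidence_drop(ref_logits, new_logits) > tol

# Synthetic stand-ins for classifier logits (hypothetical data):
# sharp logits mimic a confident in-distribution model, flat logits
# mimic the uncertain predictions often seen under data shift.
rng = np.random.default_rng(1)
sharp = rng.normal(0.0, 1.0, size=(200, 4)) * 5.0
flat = rng.normal(0.0, 1.0, size=(200, 4)) * 0.5

print(degradation_flag(sharp, sharp))  # no confidence drop
print(degradation_flag(sharp, flat))   # large confidence drop
```

Pairing such an output-level signal with input-level shift detection matches the abstract's finding: input shift alone can over-alarm, while the confidence drop tracks actual performance deterioration more closely.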
Problem

Research questions and friction points this paper is trying to address.

performance degradation
data shift
vision-language model
pathology
model reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

data shift detection
vision-language model
confidence-based monitoring
DomainSAT
performance degradation