Similarity-as-Evidence: Calibrating Overconfident VLMs for Interpretable and Label-Efficient Medical Active Learning

πŸ“… 2026-02-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the sample-selection bias and cold-start challenges that overconfident vision-language models (VLMs) introduce into medical active learning. The authors propose the Similarity-as-Evidence framework, which interprets text-image similarity vectors as evidence and models label uncertainty via a Dirichlet distribution. Epistemic uncertainty is quantified with two measures: vacuity (lack of evidence) and dissonance (conflicting evidence). A two-stage acquisition strategy follows: early rounds prioritize high-vacuity samples to mitigate cold-start effects, while later rounds select highly conflicting (high-dissonance) samples to sharpen decision boundaries. Evaluated across ten medical imaging datasets, the method achieves a macro-averaged accuracy of 82.57% using only 20% of the labeling budget and attains a negative log-likelihood of 0.425 on BTMRI, outperforming existing approaches while offering both interpretability and label efficiency.
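The two-stage acquisition described in the summary can be sketched as a simple scheduler: rank unlabeled samples by vacuity in early rounds and by dissonance later. This is an illustrative sketch, not the paper's implementation; `switch_round` and the score vectors are hypothetical names, and the paper's exact switching schedule is not specified here.

```python
import numpy as np

def select_batch(vacuity, dissonance, round_idx, switch_round, batch_size):
    """Two-stage acquisition sketch: pick high-vacuity samples early
    (coverage of under-evidenced cases), high-dissonance samples later
    (refining decision boundaries). `switch_round` is a hypothetical
    hyperparameter controlling when the criterion changes."""
    scores = np.asarray(vacuity if round_idx < switch_round else dissonance)
    # Indices of the top-scoring unlabeled samples, highest first.
    return np.argsort(-scores)[:batch_size].tolist()
```

For example, with vacuity scores `[0.9, 0.1, 0.5]` and dissonance scores `[0.1, 0.8, 0.2]`, round 0 (before a switch at round 2) selects sample 0, while round 3 selects sample 1.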

πŸ“ Abstract
Active Learning (AL) reduces annotation costs in medical imaging by selecting only the most informative samples for labeling, but suffers from a cold-start problem when labeled data are scarce. Vision-Language Models (VLMs) address the cold-start problem via zero-shot predictions, yet their temperature-scaled softmax outputs treat text-image similarities as deterministic scores while ignoring their inherent uncertainty, leading to overconfidence. This overconfidence misleads sample selection, wasting annotation budget on uninformative cases. To overcome these limitations, the Similarity-as-Evidence (SaE) framework calibrates text-image similarities by introducing a Similarity Evidence Head (SEH), which reinterprets the similarity vector as evidence and parameterizes a Dirichlet distribution over labels. In contrast to a standard softmax, which enforces confident predictions even under weak signals, the Dirichlet formulation explicitly quantifies lack of evidence (vacuity) and conflicting evidence (dissonance), thereby mitigating the overconfidence caused by rigid softmax normalization. Building on this, SaE employs a dual-factor acquisition strategy: high-vacuity samples (e.g., rare diseases) are prioritized in early rounds to ensure coverage, while high-dissonance samples (e.g., ambiguous diagnoses) are prioritized later to refine decision boundaries, providing clinically interpretable selection rationales. Experiments on ten public medical imaging datasets with a 20% label budget show that SaE attains a state-of-the-art macro-averaged accuracy of 82.57%. On the representative BTMRI dataset, SaE also achieves superior calibration, with a negative log-likelihood (NLL) of 0.425.
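The vacuity and dissonance measures named in the abstract can be computed from a non-negative evidence vector under standard subjective-logic conventions (Dirichlet parameters alpha = evidence + 1, vacuity = K / S, and the usual belief-balance form of dissonance). This is a generic sketch of those conventions, not the paper's SEH; how SaE maps raw text-image similarities to evidence is not reproduced here.

```python
import numpy as np

def dirichlet_uncertainties(evidence):
    """Vacuity and dissonance for a non-negative evidence vector,
    following common subjective-logic conventions (alpha = evidence + 1)."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0
    S = alpha.sum()                # Dirichlet strength
    belief = evidence / S          # belief mass per class
    vacuity = K / S                # uncertainty from lack of evidence

    # Dissonance: conflict among comparable, non-zero belief masses.
    diss = 0.0
    for k in range(K):
        others = np.delete(belief, k)
        denom = others.sum()
        if denom > 0:
            # Balance term: 1 when two beliefs are equal, 0 when one dominates.
            pair_sum = others + belief[k]
            bal = 1.0 - np.abs(others - belief[k]) / np.where(pair_sum > 0, pair_sum, 1.0)
            diss += belief[k] * (others * bal).sum() / denom
    return vacuity, diss
```

With no evidence at all (`[0, 0, 0]`), vacuity is 1 and dissonance 0; with strong but conflicting evidence (`[10, 10, 0]`), vacuity is low while dissonance is high, matching the coverage-then-boundary rationale of the acquisition strategy.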
Problem

Research questions and friction points this paper is trying to address.

Active Learning
Vision-Language Models
Overconfidence
Medical Imaging
Uncertainty Calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Similarity-as-Evidence
Dirichlet Calibration
Active Learning
Vision-Language Models
Uncertainty Quantification
πŸ”Ž Similar Papers
No similar papers found.
Zhuofan Xie
School of Electronic Technology and Engineering, Xiamen University, Xiamen, China
Zishan Lin
School of Electronic Technology and Engineering, Xiamen University, Xiamen, China
Jinliang Lin
School of Informatics, Xiamen University, Xiamen, China
Jie Qi
MIT Media Lab
Shaohua Hong
Xiamen University
image compression and processing
joint source and channel coding
nonlinear signal processing
Shuo Li
Fellow of SPIE, AIMBE, AAIA, IET, and IAMBE; Chair Professor, Case Western Reserve University
Artificial Intelligence
Vision-Language Model
Machine Learning
Medical Image Analysis