AI Summary
This work addresses sample selection bias and cold-start challenges in medical active learning caused by overconfident vision-language models (VLMs). The authors propose the Similarity-as-Evidence framework, which interprets text-image similarity vectors as evidence and models label uncertainty via a Dirichlet distribution. Uncertainty is quantified with two measures: vacuity (lack of evidence) and dissonance (conflicting evidence). A two-stage acquisition strategy is introduced: in the early stage, high-vacuity samples are prioritized to mitigate cold-start effects, while in the later stage, high-dissonance (conflicting) samples are selected to sharpen decision boundaries. Evaluated across ten medical imaging datasets, the method achieves a macro-averaged accuracy of 82.57% using only 20% of the labeling budget and attains a low negative log-likelihood of 0.425 on BTMRI, outperforming existing approaches while offering both interpretability and label efficiency.
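The core mechanics described above (similarity vector → Dirichlet evidence → vacuity and dissonance) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the actual Similarity Evidence Head (SEH) is a learned module, so the softplus mapping below is a hypothetical stand-in; the vacuity and dissonance formulas follow the standard subjective-logic definitions used in evidential deep learning.

```python
import numpy as np

def dirichlet_from_similarity(sim, scale=1.0):
    """Map a text-image similarity vector to Dirichlet parameters.

    Hypothetical sketch: softplus keeps evidence non-negative, and
    alpha = evidence + 1 follows the subjective-logic convention.
    """
    evidence = np.log1p(np.exp(scale * np.asarray(sim, dtype=float)))  # softplus >= 0
    return evidence + 1.0  # Dirichlet concentration parameters

def vacuity(alpha):
    """Vacuity u = K / S: high when total evidence is weak (e.g., rare cases)."""
    return len(alpha) / alpha.sum()

def dissonance(alpha):
    """Dissonance: high when strong evidence conflicts across classes."""
    S = alpha.sum()
    b = (alpha - 1.0) / S  # belief masses per class
    diss = 0.0
    for k in range(len(b)):
        others = np.delete(b, k)
        denom = others.sum()
        if denom > 0:
            # balance term: 1 when beliefs are equal, 0 when one dominates
            bal = np.where(others + b[k] > 0,
                           1.0 - np.abs(others - b[k]) / (others + b[k]), 0.0)
            diss += b[k] * (others * bal).sum() / denom
    return diss
```

With this mapping, a uniformly weak similarity vector yields high vacuity, while two near-equal high similarities yield high dissonance, matching the two failure modes the framework distinguishes.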
Abstract
Active Learning (AL) reduces annotation costs in medical imaging by selecting only the most informative samples for labeling, but suffers from the cold-start problem when labeled data are scarce. Vision-Language Models (VLMs) address the cold-start problem via zero-shot predictions, yet their temperature-scaled softmax outputs treat text-image similarities as deterministic scores while ignoring inherent uncertainty, leading to overconfidence. This overconfidence misleads sample selection, wasting annotation budgets on uninformative cases. To overcome these limitations, the Similarity-as-Evidence (SaE) framework calibrates text-image similarities by introducing a Similarity Evidence Head (SEH), which reinterprets the similarity vector as evidence and parameterizes a Dirichlet distribution over labels. In contrast to a standard softmax that enforces confident predictions even under weak signals, the Dirichlet formulation explicitly quantifies lack of evidence (vacuity) and conflicting evidence (dissonance), thereby mitigating overconfidence caused by rigid softmax normalization. Building on this, SaE employs a dual-factor acquisition strategy: high-vacuity samples (e.g., rare diseases) are prioritized in early rounds to ensure coverage, while high-dissonance samples (e.g., ambiguous diagnoses) are prioritized later to refine decision boundaries, providing clinically interpretable selection rationales. Experiments on ten public medical imaging datasets with a 20% label budget show that SaE attains a state-of-the-art macro-averaged accuracy of 82.57%. On the representative BTMRI dataset, SaE also achieves superior calibration, with a negative log-likelihood (NLL) of 0.425.
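The dual-factor acquisition strategy can be sketched as a simple scoring loop: early rounds rank unlabeled samples by vacuity for coverage, later rounds by dissonance for boundary refinement. This is an illustrative approximation under assumed conventions; the `switch_round` hyperparameter, the hard stage switch, and the `acquire` function name are not from the paper.

```python
import numpy as np

def acquire(alpha_batch, budget, round_idx, switch_round=3):
    """Hypothetical two-stage acquisition over Dirichlet parameters.

    Early rounds (round_idx < switch_round): select high-vacuity samples.
    Later rounds: select high-dissonance samples. `switch_round` is an
    assumed hyperparameter for illustration only.
    """
    scores = []
    for alpha in alpha_batch:
        alpha = np.asarray(alpha, dtype=float)
        S, K = alpha.sum(), len(alpha)
        if round_idx < switch_round:
            scores.append(K / S)  # vacuity: lack of evidence
        else:
            b = (alpha - 1.0) / S  # belief masses per class
            d = 0.0
            for k in range(K):
                others = np.delete(b, k)
                denom = others.sum()
                if denom > 0:
                    bal = np.where(others + b[k] > 0,
                                   1.0 - np.abs(others - b[k]) / (others + b[k]), 0.0)
                    d += b[k] * (others * bal).sum() / denom
            scores.append(d)  # dissonance: conflicting evidence
    # return indices of the top-`budget` scoring samples
    return list(np.argsort(scores)[::-1][:budget])
```

For example, a sample with near-uniform weak evidence wins the early (vacuity) round, whereas a sample with two strongly competing classes wins the later (dissonance) round.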