🤖 AI Summary
Human uncertainty (HU) in visual question answering (VQA) annotations, i.e., variation in annotator confidence across labels, is ignored by standard supervised fine-tuning (SFT), which simply optimizes toward the most frequent answer; collecting HU annotations at scale is also costly. This paper systematically evaluates HU's impact on SFT, finding that high-HU samples contribute little or even degrade performance and that naive training on the full dataset yields under-calibrated models. It then introduces HaDola, a human uncertainty-aware data selection and automatic labeling framework that operates in four iterative stages (discriminate, self-annotate, error trigger, and training) to identify harmful samples, prioritize informative ones, and bootstrap from a small seed set (5% of the data). HaDola substantially reduces reliance on costly HU annotations: on VQAv2 and VizWiz it matches or outperforms state-of-the-art baselines with less training data while improving both accuracy and calibration, suggesting that better use of HU is more effective than merely scaling up dataset size.
📝 Abstract
Large vision-language models (VLMs) achieve strong performance in Visual Question Answering but still rely heavily on supervised fine-tuning (SFT) with massive labeled datasets, which are costly to build because of human annotation. Crucially, real-world datasets often exhibit human uncertainty (HU) -- variation in human confidence across annotations -- but standard SFT simply optimizes toward the most frequent label, disregarding HU distributions. This leaves two open questions: How does HU affect SFT, and how can HU be effectively leveraged in training? In this work, we first conduct a systematic evaluation of VLMs across varying HU levels. We report two key findings: (i) surprisingly, high-HU samples contribute little or even degrade model performance, and (ii) naively training on the full dataset yields under-calibrated models that fail to capture HU distributions. Motivated by these findings, we introduce HaDola, a human uncertainty-aware data selection and automatic labeling framework. HaDola operates in four stages -- discriminate, self-annotate, error trigger, and training -- to iteratively identify harmful samples, prioritize informative ones, and bootstrap from a small seed set (5% of the data). Our approach substantially reduces reliance on costly HU annotations and makes VLMs more accurate and better calibrated. Extensive experiments on the VQAv2 and VizWiz datasets demonstrate that HaDola consistently matches or outperforms state-of-the-art baselines with less training data. Our work highlights the importance of explicitly modeling HU in SFT, suggesting that better utilization of HU is more effective than merely scaling up dataset size.
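For intuition, here is a minimal Python sketch of how such a four-stage bootstrapping loop could look. The abstract includes no pseudocode, so every name and detail below (the `Sample` fields, the `DummyVLM` stub with `predict`/`fit`, the HU threshold, and the acceptance rule) is an assumption made purely for illustration, not the authors' implementation.

```python
# Illustrative sketch of a discriminate -> self-annotate -> error-trigger -> train
# loop that bootstraps from a small labeled seed set (~5% of the data, per the
# abstract). All APIs and thresholds here are assumptions, not the paper's method.

import random
from dataclasses import dataclass


@dataclass
class Sample:
    image_id: str
    question: str
    answer: str | None = None       # human label (seed set) or self-assigned label
    human_uncertainty: float = 0.0  # hypothetical per-sample HU score in [0, 1]


class DummyVLM:
    """Stand-in for a fine-tunable VLM; replace with a real model wrapper."""

    def predict(self, sample: Sample) -> str:
        return "yes"                # placeholder answer

    def fit(self, samples: list[Sample]) -> None:
        pass                        # placeholder SFT step


def discriminate(pool, hu_threshold=0.7):
    """Stage 1 (assumed): filter likely-harmful high-HU samples, keep informative ones."""
    kept = [s for s in pool if s.human_uncertainty < hu_threshold]
    harmful = [s for s in pool if s.human_uncertainty >= hu_threshold]
    return kept, harmful


def self_annotate(samples, model):
    """Stage 2 (assumed): let the current model label samples that lack an answer."""
    for s in samples:
        if s.answer is None:
            s.answer = model.predict(s)
    return samples


def error_trigger(samples, model):
    """Stage 3 (assumed): flag samples where the model's prediction disagrees
    with the stored label, deferring them to the next round."""
    return [s for s in samples if model.predict(s) != s.answer]


def hadola_style_loop(seed_set, pool, model, rounds=3):
    """Iteratively grow and refine the training set, starting from the seed set."""
    train_set = list(seed_set)
    for _ in range(rounds):
        model.fit(train_set)                 # Stage 4: SFT on the current selection
        kept, _harmful = discriminate(pool)
        kept = self_annotate(kept, model)
        flagged = error_trigger(kept, model)
        flagged_ids = {s.image_id for s in flagged}
        train_set += [s for s in kept if s.image_id not in flagged_ids]
        pool = flagged                       # revisit flagged samples next round
    return model


# Toy usage: 5 human-labeled seed samples plus 95 unlabeled ones.
seed = [Sample(f"seed{i}", "Is there a dog?", answer="yes") for i in range(5)]
pool = [Sample(f"img{i}", "Is there a dog?", human_uncertainty=random.random())
        for i in range(95)]
hadola_style_loop(seed, pool, DummyVLM())
```

In this toy version the error trigger simply defers disagreeing samples to the next round; how HaDola actually scores harmfulness, triggers re-annotation, and updates the training distribution is specified in the paper itself, not here.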