🤖 AI Summary
A prevalent issue in Vision-Language Understanding (VLU) benchmarks is that ground-truth answers often rely on contextual information not provided to the model, leading to hallucination, biased learning, and unreliable evaluation. To address this, the authors propose a context-aware active abstention mechanism. Key contributions: (1) CARA, a general-purpose Context-AwaRe Abstention detector designed to generalize across benchmarks; (2) the CASE (Context Ambiguity and Sufficiency Evaluation) set, a novel suite for systematically benchmarking insufficient-context detectors; and (3) a context selection module, trained jointly across multiple benchmarks, that grounds model predictions in available evidence. Experiments on VQA v2, OKVQA, and GQA demonstrate substantial accuracy improvements. CARA maintains high detection rates on benchmarks it was not trained on, reducing unsupported predictions by 37.2% and thereby strengthening evidence grounding and output credibility.
📝 Abstract
Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples where answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucination, as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach. Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector that identifies samples lacking sufficient context and enhances model accuracy by abstaining from responding when the required context is absent. CARA generalizes to new benchmarks it was not trained on, underscoring its utility for future VLU benchmarks in detecting or cleaning samples with inadequate context. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient-context detectors. Overall, our work represents a significant step toward ensuring that vision-language models generate trustworthy, evidence-based outputs in complex real-world scenarios.
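The abstention idea described above can be sketched as a thin wrapper around an answering model: a detector scores whether the available context is sufficient, and the system abstains below a cutoff instead of guessing. This is a minimal illustrative sketch, not the paper's implementation; the `context_score`, `answer_fn`, and `threshold` names are hypothetical, and in practice the score would come from a trained detector like CARA and the threshold would be tuned on validation data.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Prediction:
    answer: Optional[str]  # None when the model abstained
    abstained: bool

def answer_with_abstention(
    question: str,
    context_score: float,            # detector's context-sufficiency score in [0, 1]
    answer_fn: Callable[[str], str], # underlying VQA model (stand-in here)
    threshold: float = 0.5,          # hypothetical cutoff; tuned on held-out data in practice
) -> Prediction:
    """Abstain when the detector judges the context insufficient;
    otherwise return the underlying model's answer."""
    if context_score < threshold:
        return Prediction(answer=None, abstained=True)
    return Prediction(answer=answer_fn(question), abstained=False)

# Toy stand-in for a VQA model that always answers "red".
vqa = lambda q: "red"

print(answer_with_abstention("What color is the occluded car?", 0.2, vqa))
print(answer_with_abstention("What color is the bus?", 0.9, vqa))
```

The wrapper keeps the answering model unchanged, which matches the abstract's framing of CARA as a detector that can be attached to existing models and future benchmarks.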