Addressing Blind Guessing: Calibration of Selection Bias in Multiple-Choice Question Answering by Video Language Models

πŸ“… 2024-10-18
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing video-language models (VLMs) exhibit significant positional bias in multiple-choice question answering (MCQA) evaluation: they blindly favor certain answer positions due to answer-location patterns in their training data, which inflates performance metrics and misrepresents true comprehension. This work is the first to systematically identify and characterize such positional bias in video-based MCQA. The authors propose BOLD, a post-hoc calibration method that requires no additional training or annotation. BOLD decomposes the MCQA task to define a fairness-aware metric, then applies answer-position-agnostic probability reweighting and response recalibration. By incorporating fairness metrics directly into VLM evaluation, BOLD suppresses selection bias while simultaneously improving both Accuracy and F1 Mean. The approach is computationally efficient, broadly applicable across VLMs, and practical for real-world deployment.
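The summary does not reproduce BOLD's exact equations, but the core idea of answer-position-agnostic probability reweighting can be sketched in a few lines: estimate the model's empirical per-slot prior over a dataset, then divide each question's option probabilities by that prior and renormalize before taking the argmax. The function names and toy numbers below are illustrative assumptions, not the authors' implementation.

```python
def positional_prior(all_probs):
    """Average probability mass the model assigns to each answer slot
    across a dataset; an unbiased model would give roughly 1/k per slot.
    (Illustrative helper, not part of the paper's published code.)"""
    n, k = len(all_probs), len(all_probs[0])
    return [sum(row[i] for row in all_probs) / n for i in range(k)]

def calibrate(probs, prior, eps=1e-12):
    """Divide out the per-slot prior and renormalize, so an option is
    chosen for its content rather than its position."""
    adjusted = [p / (q + eps) for p, q in zip(probs, prior)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# Toy example: a model that systematically over-weights the first slot.
raw = [
    [0.50, 0.20, 0.15, 0.15],
    [0.45, 0.30, 0.15, 0.10],
    [0.40, 0.20, 0.25, 0.15],
]
prior = positional_prior(raw)          # ~[0.45, 0.23, 0.18, 0.13]
calibrated = [calibrate(row, prior) for row in raw]
```

In this toy run, the uncalibrated argmax lands on the first slot for every question, while the calibrated distributions no longer uniformly favor it; this is the "blind guessing" suppression effect described above, stated under the assumption that reweighting by an empirical slot prior approximates BOLD's recalibration step.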

πŸ“ Abstract
Evaluating Video Language Models (VLMs) is a challenging task. Due to its transparency, Multiple-Choice Question Answering (MCQA) is widely used to measure the performance of these models through accuracy. However, existing MCQA benchmarks fail to capture the full reasoning capabilities of VLMs due to selection bias, whereby models disproportionately favor certain answer options based on positional patterns observed during training. In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. We identify where the bias is most pronounced and demonstrate to what extent model responses reflect genuine understanding of video content and related questions, as opposed to reliance on arbitrary patterns or superficial cues such as answer position. By decomposing the MCQA task and adapting fairness bias metrics to VLMs, we introduce BOLD, a post-processing calibration technique that balances this bias. Our results show that reducing selection bias improves not only debiasing metrics but also overall model performance, including Accuracy and F1 Mean score. By suppressing "blind guessing", our method offers a more cost- and time-effective approach to mitigating selection bias than existing techniques. This study represents the first focused investigation of selection bias in video-to-text LLM-powered models.
Problem

Research questions and friction points this paper is trying to address.

Identifies selection bias in Video Language Models' multiple-choice answers
Proposes calibration technique BOLD to reduce bias and improve accuracy
Assesses genuine video understanding versus superficial pattern reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc calibration technique BOLD reduces selection bias without retraining
Decomposes the MCQA task to adapt fairness metrics to VLMs
Improves both Accuracy and F1 Mean score
πŸ”Ž Similar Papers
No similar papers found.