🤖 AI Summary
Existing video-language models (VLMs) exhibit significant positional bias in multiple-choice question answering (MCQA) evaluation: they blindly favor certain answer positions due to answer-location patterns in their training data, which inflates performance metrics and misrepresents true comprehension. This work is the first to systematically identify and characterize such positional bias in video-based MCQA. We propose BOLD, a post-hoc calibration method that requires no additional training or annotation. BOLD decomposes the MCQA task to define a fairness-aware metric, then applies answer-position-agnostic probability reweighting and response recalibration. By explicitly incorporating fairness metrics into VLM evaluation, BOLD simultaneously suppresses selection bias and improves both Accuracy and F1 Mean, achieving bias mitigation and performance gains together. The approach is computationally efficient, broadly applicable across VLMs, and practical for real-world deployment.
📝 Abstract
Evaluating Video Language Models (VLMs) is a challenging task. Due to its transparency, Multiple-Choice Question Answering (MCQA) is widely used to measure the performance of these models through accuracy. However, existing MCQA benchmarks fail to capture the full reasoning capabilities of VLMs due to selection bias, whereby models disproportionately favor certain answer options based on positional patterns observed during training. In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. We identify where the bias is most pronounced and demonstrate to what extent model responses reflect genuine understanding of video content and related questions, as opposed to reliance on arbitrary patterns or superficial cues, such as answer position. By decomposing the MCQA task and adapting fairness bias metrics to VLMs, we introduce a post-processing calibration technique, BOLD, to balance this bias. Our results show that reducing selection bias improves not only debiasing metrics but also overall model performance, including Accuracy and F1 Mean score. By suppressing "blind guessing", our method offers a more cost- and time-effective approach to mitigating selection bias than existing techniques. This study represents the first focused investigation of selection bias in video-to-text LLM-powered models.
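The core idea behind answer-position-agnostic reweighting can be illustrated with a minimal sketch: average the model's per-position answer probabilities over every ordering of the candidate options, so that any fixed positional prior cancels out. This is a generic permutation-marginalization sketch under assumed names (`permutation_debiased_prediction`, `biased_score_fn` are illustrative, and the toy scoring function stands in for a real VLM); it is not the paper's BOLD implementation.

```python
import itertools
import numpy as np

def permutation_debiased_prediction(score_fn, options):
    """Marginalize positional bias by averaging each option's predicted
    probability over all orderings of the options. `score_fn(ordered)` is
    assumed to return a probability vector over positions (A, B, C, ...)
    for that ordering. Illustrative sketch, not the paper's BOLD method."""
    n = len(options)
    content_scores = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        probs = score_fn([options[i] for i in perm])
        for pos, orig_idx in enumerate(perm):
            # Credit each option with the probability it received at
            # whatever position it occupied in this ordering.
            content_scores[orig_idx] += probs[pos]
    content_scores /= len(perms)
    return options[int(np.argmax(content_scores))]

# Toy "model": a weak content signal plus a strong prior for position A.
def biased_score_fn(ordered_options):
    content = {"cat": 2.0, "dog": 1.0, "bird": 0.0}   # true preference
    position_bias = [3.0, 0.0, 0.0]                    # blindly favors A
    logits = np.array([content[o] + b
                       for o, b in zip(ordered_options, position_bias)])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```

With the raw `biased_score_fn`, whichever option happens to sit in position A wins; after permutation averaging, the content-preferred option ("cat") is recovered regardless of where it appears, which is the behavior a position-agnostic calibration aims for.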