🤖 AI Summary
Existing video benchmarks suffer from representation biases, such as object bias or single-frame bias, where recognizing objects or looking at a single frame is often sufficient for a correct prediction, distorting model evaluation. To address this, we propose UTD, a video debiasing benchmark. Our method uses vision-language models (VLMs) and large language models (LLMs) to automatically generate frame-level structured textual descriptions, and analyzes benchmarks along three dimensions: concept bias, temporal bias, and common sense versus dataset bias. We systematically analyze 12 popular video classification and retrieval datasets to identify and mitigate bias sources, and release a structured description set (UTD-descriptions) together with object-debiased test splits (UTD-splits). Benchmarking 30 state-of-the-art video models on the original and debiased splits reveals that many rely heavily on spurious single-frame or object-level cues, underscoring the need for benchmarks that assess genuine spatiotemporal reasoning.
📝 Abstract
We propose a new "Unbiased through Textual Description (UTD)" video benchmark based on unbiased subsets of existing video classification and retrieval datasets to enable a more robust assessment of video understanding capabilities. Namely, we tackle the problem that current video benchmarks may suffer from different representation biases, e.g., object bias or single-frame bias, where mere recognition of objects or utilization of only a single frame is sufficient for correct prediction. We leverage VLMs and LLMs to analyze and debias benchmarks from such representation biases. Specifically, we generate frame-wise textual descriptions of videos, filter them for specific information (e.g., only objects), and leverage them to examine representation biases across three dimensions: 1) concept bias - determining if a specific concept (e.g., objects) alone suffices for prediction; 2) temporal bias - assessing if temporal information contributes to prediction; and 3) common sense vs. dataset bias - evaluating whether zero-shot reasoning or dataset correlations contribute to prediction. We conduct a systematic analysis of 12 popular video classification and retrieval datasets and create new object-debiased test splits for these datasets. Moreover, we benchmark 30 state-of-the-art video models on original and debiased splits and analyze biases in the models. To facilitate the future development of more robust video understanding benchmarks and models, we release: "UTD-descriptions", a dataset with our rich structured descriptions for each dataset, and "UTD-splits", a dataset of object-debiased test splits.
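The core debiasing step described above can be sketched in a few lines. This is a minimal illustration with toy data and hypothetical names, not the authors' actual pipeline: it assumes each test sample has already been scored by a concept-biased predictor (e.g., one that sees only the object-level descriptions), and keeps only the samples that this predictor gets wrong, so that the debiased split cannot be solved by object recognition alone.

```python
# Hypothetical sketch of building an object-debiased test split.
# Assumption: `object_only_correct[i]` records whether a predictor that
# sees only object-level descriptions answers sample i correctly.

def debias_split(samples, object_only_correct):
    """Keep only samples the object-only (concept-biased) predictor fails on,
    so correct answers require more than object recognition."""
    return [s for s, ok in zip(samples, object_only_correct) if not ok]

# Toy example: 5 test videos; an object-only predictor solves 3 of them.
samples = ["vid0", "vid1", "vid2", "vid3", "vid4"]
object_only_correct = [True, False, True, False, True]

utd_split = debias_split(samples, object_only_correct)
print(utd_split)  # ['vid1', 'vid3']
```

Models are then evaluated on both the original split and the debiased split; a large accuracy drop on the debiased split signals reliance on the removed shortcut cue.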