🤖 AI Summary
This work addresses a critical limitation in evaluating current vision-language models (VLMs) on long-form video understanding: when the key frames needed to answer a question are missing from the sampled input, models are forced to guess, which distorts performance evaluation and obscures whether they can behave honestly under uncertainty. To tackle this, we introduce VirtueBench, the first benchmark that systematically evaluates model refusal behavior in this setting. Using multi-granularity frame sampling to distinguish answerable from unanswerable questions, together with answerability annotations and tailored prompting, we evaluate 25 leading VLMs. Results show that while the best-performing model achieves over 70% refusal accuracy, most models refuse far less often when the prompt does not explicitly instruct them to, revealing a substantial gap in trustworthy reasoning. This study advocates shifting the evaluation paradigm from mere accuracy toward honesty and reliability.
📝 Abstract
Recent Vision-Language Models (VLMs) have made remarkable progress on multimodal understanding tasks, yet their evaluation on long-video understanding remains unreliable. Because the number of input frames is limited, the key frames needed to answer a question may be missing from the model's input. A model that truthfully refuses to answer under such uncertainty is marked incorrect, while one that guesses may coincidentally produce the correct answer and obtain deceptively high accuracy; this yields misleading evaluation results and rewards guessing over honest responses. To address this issue, we introduce VirtueBench, a benchmark explicitly designed to assess model trustworthiness under uncertainty. VirtueBench constructs multiple frame-sampling levels for each video and provides ground truths that distinguish answerable from unanswerable cases. Evaluations of 25 open-source and commercial VLMs reveal distinct refusal behaviors across model families, with refusal accuracy ranging from over 70% in the best models to nearly 0% in the worst. Moreover, most models show a substantial drop in refusal when the prompt does not explicitly instruct them to refuse. These findings highlight the need to develop trustworthy VLMs for multimodal understanding, guided by benchmarks and leaderboards that emphasize reliability and trustworthiness.
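The evaluation protocol described above can be sketched as follows. This is a minimal, hypothetical illustration (the `EvalItem` fields, the `"REFUSE"` token, and the scoring function are assumptions, not VirtueBench's actual implementation): each question carries an answerability label for the given frame-sampling level, answer accuracy is computed only on answerable items, and refusal accuracy only on unanswerable ones, so a lucky guess on an unanswerable item no longer inflates the score.

```python
# Hypothetical scoring sketch for an answerability-aware benchmark.
# Assumed data model: each item knows whether the sampled frames
# contain the evidence needed to answer ("answerable").
from dataclasses import dataclass

@dataclass
class EvalItem:
    answerable: bool   # do the sampled frames contain the key evidence?
    gold_answer: str   # correct option, e.g. "B"
    response: str      # model output: an option letter, or "REFUSE"

def score(items):
    """Return (answer accuracy on answerable items,
    refusal accuracy on unanswerable items)."""
    ans = [it for it in items if it.answerable]
    unans = [it for it in items if not it.answerable]
    acc = sum(it.response == it.gold_answer for it in ans) / max(len(ans), 1)
    refusal_acc = sum(it.response == "REFUSE" for it in unans) / max(len(unans), 1)
    return acc, refusal_acc

items = [
    EvalItem(True, "B", "B"),        # answerable, correct
    EvalItem(True, "C", "A"),        # answerable, wrong guess
    EvalItem(False, "D", "REFUSE"),  # unanswerable, honest refusal
    EvalItem(False, "A", "A"),       # unanswerable, lucky guess: no credit here
]
acc, refusal_acc = score(items)
print(acc, refusal_acc)  # 0.5 0.5
```

Under a naive accuracy-only metric, the lucky guess in the last item would count as correct (3/4); splitting the metric by answerability instead credits the honest refusal and penalizes the guess.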