🤖 AI Summary
Existing Video-LLMs exhibit severe hallucination and omission in open-ended text generation (e.g., video captioning), yet standard multiple-choice benchmarks inadequately assess these failure modes. Method: We introduce ARGUS, the first benchmark dedicated to evaluating free-form video description generation, featuring a “factuality–completeness” dual-dimension automated evaluation framework that jointly quantifies the hallucination rate (false statements) and the omission rate (missing critical details). Ground-truth annotations enable fine-grained scoring via semantic alignment and temporal-relation verification. Contribution/Results: Experiments across 12 state-of-the-art Video-LLMs reveal that hallucination rates in open-ended generation are 3.2× higher than in multiple-choice tasks, highlighting substantial real-world deployment risks. ARGUS establishes a scalable, reproducible paradigm for assessing the trustworthiness of video foundation models.
📝 Abstract
Video large language models (VideoLLMs) have not yet been widely deployed, largely due to their tendency to hallucinate. Typical benchmarks for VideoLLMs rely simply on multiple-choice questions. Unfortunately, VideoLLMs hallucinate far more aggressively on free-form text generation tasks like video captioning than they do on multiple-choice verification tasks. To address this weakness, we propose ARGUS, a VideoLLM benchmark that measures free-form video captioning performance. By comparing VideoLLM outputs to human ground-truth captions, ARGUS quantifies two complementary metrics. First, we measure the rate of hallucinations, i.e., incorrect statements about video content or temporal relationships. Second, we measure the rate at which the model omits important descriptive details. Together, these dual metrics form a comprehensive view of video captioning performance.
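The dual-metric idea can be made concrete with a small sketch: decompose the generated caption and the ground-truth caption into atomic claims, then count generated claims unsupported by the ground truth (hallucination rate) and ground-truth claims not covered by the generation (omission rate). This is an illustrative toy, not the paper's implementation — `entails`, the claim lists, and the exact-match placeholder are all assumptions; a real system would use an NLI or LLM-based entailment judge for the support check.

```python
def dual_metrics(generated_claims, gt_claims, entails):
    """Toy factuality/completeness scorer (assumed interface, not ARGUS itself).

    entails(premise, hypothesis) -> bool: does the premise support the claim?
    Returns (hallucination_rate, omission_rate), each in [0, 1].
    """
    # A generated claim is hallucinated if no ground-truth claim supports it.
    hallucinated = [c for c in generated_claims
                    if not any(entails(g, c) for g in gt_claims)]
    # A ground-truth claim is omitted if no generated claim covers it.
    omitted = [g for g in gt_claims
               if not any(entails(c, g) for c in generated_claims)]
    hall_rate = len(hallucinated) / len(generated_claims) if generated_claims else 0.0
    omit_rate = len(omitted) / len(gt_claims) if gt_claims else 0.0
    return hall_rate, omit_rate


# Toy usage with exact string match standing in for semantic entailment.
generated = ["a dog runs across the yard", "a cat sleeps on the couch"]
ground_truth = ["a dog runs across the yard", "a man waves at the camera"]
exact = lambda premise, hypothesis: premise == hypothesis
print(dual_metrics(generated, ground_truth, exact))  # (0.5, 0.5)
```

Replacing the `exact` placeholder with a learned entailment model is the step that makes such a scorer robust to paraphrase, which is why the abstract's comparison against human captions requires semantic rather than lexical matching.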