🤖 AI Summary
Current Video Large Language Models (VideoLLMs) commonly suffer from output repetition, in which generated content falls into self-reinforcing loops, a stability flaw largely overlooked by existing evaluation frameworks. This work proposes VideoSTF, the first framework to systematically formalize this issue, introducing three n-gram-based repetition metrics, a benchmark of 10,000 diverse videos, and a library of controllable temporal transformations. Using these tools, the authors conduct comprehensive evaluations, including prevalence analysis, temporal stress testing, and black-box adversarial probing, across ten prominent VideoLLMs. The findings reveal that output repetition is highly sensitive to minute temporal perturbations and can be reliably triggered by simple transformations, exposing a critical security vulnerability. These results underscore repetition as a fundamental stability defect and motivate stability-aware evaluation paradigms for video-language modeling.
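The summary names "n-gram-based repetition metrics" without defining them. As a hedged illustration only (the function name, choice of n, and exact formula are assumptions, not the paper's definitions), one plausible such metric is the fraction of n-grams in a generated caption that are repeats of an earlier n-gram:

```python
from collections import Counter

def ngram_repetition_ratio(text: str, n: int = 4) -> float:
    """Fraction of n-grams that duplicate an earlier n-gram.

    Illustrative metric only; VideoSTF's three metrics may differ.
    0.0 means no repeated n-grams; values near 1.0 indicate looping output.
    """
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())  # occurrences beyond the first
    return repeated / len(ngrams)

# A degenerate, looping caption scores much higher than a normal one:
looping = "the man walks the man walks the man walks the man walks"
normal = "the man walks across the street and waves to a friend"
assert ngram_repetition_ratio(looping) > ngram_repetition_ratio(normal)
```

Metrics of this shape are cheap to compute over model outputs at scale, which is what large-scale prevalence analysis over 10,000 videos would require.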
📝 Abstract
Video Large Language Models (VideoLLMs) have recently achieved strong performance in video understanding tasks. However, we identify a previously underexplored generation failure: severe output repetition, where models degenerate into self-reinforcing loops of repeated phrases or sentences. This failure mode is not captured by existing VideoLLM benchmarks, which focus primarily on task accuracy and factual correctness. We introduce VideoSTF, the first framework for systematically measuring and stress-testing output repetition in VideoLLMs. VideoSTF formalizes repetition using three complementary n-gram-based metrics and provides a standardized testbed of 10,000 diverse videos together with a library of controlled temporal transformations. Using VideoSTF, we conduct prevalence testing, temporal stress testing, and adversarial exploitation across 10 advanced VideoLLMs. We find that output repetition is widespread and, critically, highly sensitive to temporal perturbations of video inputs. Moreover, we show that simple temporal transformations can efficiently induce repetitive degeneration in a black-box setting, exposing output repetition as an exploitable security vulnerability. Our results reveal output repetition as a fundamental stability issue in modern VideoLLMs and motivate stability-aware evaluation for video-language systems. Our evaluation code and scripts are available at: https://github.com/yuxincao22/VideoSTF_benchmark.
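The abstract does not detail the transformation library. As a sketch under stated assumptions (all function names and parameters are illustrative, not VideoSTF's API), "simple temporal transformations" of the kind described can be expressed as index remappings over a sampled frame sequence:

```python
import random
from typing import List, Sequence, TypeVar

T = TypeVar("T")

# Illustrative temporal transformations over a list of decoded frames.
# The actual VideoSTF transformation library may differ.

def reverse_frames(frames: Sequence[T]) -> List[T]:
    """Play the clip backwards."""
    return list(reversed(frames))

def drop_frames(frames: Sequence[T], every: int = 2) -> List[T]:
    """Keep every `every`-th frame (uniform frame dropping, i.e. a speed-up)."""
    return list(frames[::every])

def shuffle_frames(frames: Sequence[T], seed: int = 0) -> List[T]:
    """Randomly permute frame order, seeded for reproducible stress tests."""
    rng = random.Random(seed)
    out = list(frames)
    rng.shuffle(out)
    return out

def repeat_segment(frames: Sequence[T], start: int, end: int, times: int = 2) -> List[T]:
    """Loop the sub-clip frames[start:end] `times` times in place."""
    return list(frames[:start]) + list(frames[start:end]) * times + list(frames[end:])

frames = list(range(8))  # stand-in for 8 decoded video frames
assert reverse_frames(frames) == [7, 6, 5, 4, 3, 2, 1, 0]
assert drop_frames(frames) == [0, 2, 4, 6]
assert repeat_segment(frames, 2, 4, times=3) == [0, 1, 2, 3, 2, 3, 2, 3, 4, 5, 6, 7]
```

Because each transformation only reorders or duplicates existing frames, an attacker needs no pixel-level access or gradient information, which is consistent with the black-box exploitation setting the abstract describes.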