🤖 AI Summary
This study reveals a critical vulnerability in VideoLLMs: severe underdetection of harmful content. Even when malicious content is present in every video frame, mainstream models exhibit omission rates above 90%. The root cause lies in three systemic deficiencies: sparse frame sampling, loss of spatial information, and misalignment between visual and textual encoding/decoding. To address this, we propose the first query-free black-box attack framework for VideoLLM security evaluation, integrating frame-level sampling analysis, token-level dimensionality-reduction assessment, and cross-modal decoding correlation testing. Applied to five state-of-the-art VideoLLMs, our method exposes fundamental semantic-coverage weaknesses solely through input-output behavioral analysis, without requiring gradient access or internal model knowledge. The framework enables reproducible, low-barrier safety evaluation of video-language models, establishing a new paradigm for assessing video content safety.
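The token-level dimensionality reduction named above can be sketched numerically. This is an illustrative toy example, not the study's method: the grid size, pooling factor, and activation values are all hypothetical.

```python
# Illustrative sketch (hypothetical numbers): how aggressive token
# downsampling dilutes the signal of a small on-screen patch.

def avg_pool_2x2(grid: list[list[float]]) -> list[list[float]]:
    """Average-pool a square grid by a factor of 2 in each dimension."""
    n = len(grid)
    return [[(grid[2 * r][2 * c] + grid[2 * r][2 * c + 1]
              + grid[2 * r + 1][2 * c] + grid[2 * r + 1][2 * c + 1]) / 4
             for c in range(n // 2)] for r in range(n // 2)]

# An 8x8 grid of visual-token activations: one corner cell carries the
# harmful patch (activation 1.0); the rest is benign background (0.0).
grid = [[0.0] * 8 for _ in range(8)]
grid[0][0] = 1.0

once = avg_pool_2x2(grid)    # 64 -> 16 tokens
twice = avg_pool_2x2(once)   # 16 -> 4 tokens
print(max(max(row) for row in once))   # → 0.25
print(max(max(row) for row in twice))  # → 0.0625
```

After two rounds of pooling, the patch's peak activation drops to 1/16 of its original value, which is one way a small harmful region can fall below whatever the decoder effectively attends to.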
📝 Abstract
Video Large Language Models (VideoLLMs) are increasingly deployed in critical applications, where users rely on auto-generated summaries while only casually skimming the video stream. We show that this interaction hides a critical safety gap: when harmful content is embedded in a video, either as full-frame inserts or as small corner patches, state-of-the-art VideoLLMs rarely mention it in their outputs, despite its clear visibility to human viewers. A root-cause analysis reveals three compounding design flaws: (1) insufficient temporal coverage, resulting from the sparse, uniformly spaced frame sampling used by most leading VideoLLMs; (2) spatial information loss, introduced by aggressive token downsampling within sampled frames; and (3) encoder-decoder disconnection, whereby visual cues are only weakly utilized during text generation. Leveraging these insights, we craft three zero-query black-box attacks, each aligned with one of these flaws in the processing pipeline. Our large-scale evaluation across five leading VideoLLMs shows that the harmfulness omission rate exceeds 90% in most cases; even when harmful content is clearly present in every frame, these models consistently fail to identify it. These results underscore a fundamental vulnerability in current VideoLLM designs and highlight the urgent need for sampling strategies, token compression, and decoding mechanisms that guarantee semantic coverage rather than speed alone.
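Flaw (1) above, insufficient temporal coverage from sparse uniform sampling, can be made concrete with a short sketch. The frame rate, sample budget, and segment placement below are hypothetical choices for illustration, not values from the evaluation.

```python
# Illustrative sketch (hypothetical frame counts): how sparse, uniformly
# spaced sampling lets a multi-second harmful segment go entirely unseen.

def uniform_sample(total_frames: int, num_samples: int) -> list[int]:
    """Center-of-bin, uniformly spaced frame indices."""
    step = total_frames / num_samples
    return [int(i * step + step / 2) for i in range(num_samples)]

def segment_is_sampled(sampled: list[int], start: int, end: int) -> bool:
    """True if any sampled frame falls inside [start, end)."""
    return any(start <= f < end for f in sampled)

# A 30 fps, 2-minute video (3600 frames) sampled at a budget of 16 frames
# leaves gaps of ~225 frames (~7.5 s) between consecutive samples.
sampled = uniform_sample(total_frames=3600, num_samples=16)

# A 5-second harmful insert (frames 150-299) fits inside one such gap,
# so no sampled frame ever shows it to the model.
print(segment_is_sampled(sampled, start=150, end=300))  # → False
```

Any segment shorter than the inter-sample gap can be placed so that it is never observed, which is why a zero-query attacker needs no model internals to exploit this flaw.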