Failures to Surface Harmful Contents in Video Large Language Models

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals a critical vulnerability in VideoLLMs: severe underdetection of harmful content. Even when malicious content appears in every video frame, mainstream models exhibit omission rates above 90%. The root cause lies in three systemic deficiencies: sparse frame sampling, loss of spatial information, and misalignment between visual and textual encoding/decoding. To address this, we propose the first query-free black-box attack framework for VideoLLM security evaluation, integrating frame-level sampling analysis, token-level dimensionality-reduction assessment, and cross-modal decoding correlation testing. Applied to five state-of-the-art VideoLLMs, our method exposes fundamental semantic-coverage weaknesses solely through input-output behavioral analysis, without requiring gradient access or internal model knowledge. The framework enables reproducible, low-barrier safety evaluation of video-language models, establishing a novel paradigm for assessing video content safety.

📝 Abstract
Video Large Language Models (VideoLLMs) are increasingly deployed in numerous critical applications, where users rely on auto-generated summaries while casually skimming the video stream. We show that this interaction hides a critical safety gap: if harmful content is embedded in a video, either as full-frame inserts or as small corner patches, state-of-the-art VideoLLMs rarely mention the harmful content in the output, despite its clear visibility to human viewers. A root-cause analysis reveals three compounding design flaws: (1) insufficient temporal coverage resulting from the sparse, uniformly spaced frame sampling used by most leading VideoLLMs, (2) spatial information loss introduced by aggressive token downsampling within sampled frames, and (3) encoder-decoder disconnection, whereby visual cues are only weakly utilized during text generation. Leveraging these insights, we craft three zero-query black-box attacks, each targeting one of these flaws in the processing pipeline. Our large-scale evaluation across five leading VideoLLMs shows that the harmfulness omission rate exceeds 90% in most cases. Even when harmful content is clearly present in all frames, these models consistently fail to identify it. These results underscore a fundamental vulnerability in current VideoLLMs' designs and highlight the urgent need for sampling strategies, token compression, and decoding mechanisms that guarantee semantic coverage rather than speed alone.
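To make the first flaw concrete, here is a minimal sketch of sparse, uniformly spaced frame sampling and the temporal coverage gap it creates. The frame counts, sampling budget, and helper functions are illustrative assumptions, not values or code from the paper:

```python
# Hedged sketch: why sparse uniform frame sampling can miss short segments.
# All numbers below are illustrative assumptions, not figures from the paper.

def uniform_sample_indices(num_frames: int, num_samples: int) -> list[int]:
    """Uniformly spaced frame indices, a common VideoLLM preprocessing step."""
    if num_samples >= num_frames:
        return list(range(num_frames))
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

def segment_is_sampled(sampled: list[int], start: int, end: int) -> bool:
    """True if any sampled frame index falls inside [start, end)."""
    return any(start <= idx < end for idx in sampled)

# A 30 fps, 5-minute video has 9000 frames; sampling only 32 of them leaves
# gaps of ~281 frames (~9.4 s), so a 5-second harmful insert (150 frames)
# can fall entirely between two sampled frames and never reach the model.
sampled = uniform_sample_indices(9000, 32)
print(segment_is_sampled(sampled, 300, 450))  # → False: the insert is skipped
```

Under these assumptions, any segment shorter than the sampling gap has a substantial chance of being skipped entirely, regardless of how visible it is to a human viewer.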
Problem

Research questions and friction points this paper is trying to address.

VideoLLMs fail to detect harmful content in videos
Sparse frame sampling causes insufficient temporal coverage
Token downsampling leads to spatial information loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

First query-free (zero-query) black-box attack framework for VideoLLM safety evaluation
Three attacks, each targeting a pipeline flaw: sparse sampling, token downsampling, encoder-decoder disconnection
Large-scale evaluation of five leading VideoLLMs using only input-output behavior, without gradient or internal access
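The second flaw, spatial information loss from token downsampling, can also be sketched in a few lines. The 2x2 average pooling below is an assumed stand-in for the token compression many VideoLLMs apply; it shows how a single salient patch (e.g., a small harmful corner insert) is diluted:

```python
# Hedged sketch: spatial token downsampling via 2x2 average pooling, an
# assumed proxy for the token compression used in VideoLLM pipelines.

def avg_pool_2x2(grid: list[list[float]]) -> list[list[float]]:
    """Average-pool a 2D grid of scalar token activations by 2x2 blocks."""
    pooled = []
    for i in range(0, len(grid), 2):
        row = []
        for j in range(0, len(grid[0]), 2):
            block = [grid[i + di][j + dj] for di in range(2) for dj in range(2)]
            row.append(sum(block) / 4)
        pooled.append(row)
    return pooled

# 4x4 patch grid with one highly salient corner patch (1.0), rest neutral.
grid = [[0.0] * 4 for _ in range(4)]
grid[0][0] = 1.0
pooled = avg_pool_2x2(grid)
print(pooled[0][0])  # → 0.25: the salient patch's activation is quartered
```

Each round of pooling averages the anomalous patch with its neutral neighbors, so a small harmful region can fall below whatever salience the decoder responds to, consistent with the corner-patch attacks described in the abstract.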