🤖 AI Summary
This work identifies, for the first time, a critical security vulnerability in the prompt-guided sampling mechanisms of Video Large Language Models (VideoLLMs): susceptibility to black-box poisoning attacks. To exploit it, the authors propose PoisonVID, the first dedicated attack framework, which runs a closed-loop optimization leveraging a shadow VideoLLM and a lightweight language model (e.g., GPT-4o-mini) to generate a universal textual perturbation. The perturbation implicitly suppresses the relevance scores of harmful key frames, thereby compromising video understanding. Crucially, the attack constructs a depiction set by paraphrasing harmful descriptions, enabling stealthy manipulation of the sampling process. Extensive evaluations across three state-of-the-art VideoLLMs and three mainstream prompt-guided sampling strategies demonstrate attack success rates of 82%–99%. This provides the first systematic empirical evidence of severe security flaws in prompt-guided sampling, serving both as a critical warning and as a foundation for developing robust sampling mechanisms.
📝 Abstract
Video Large Language Models (VideoLLMs) have emerged as powerful tools for video understanding, supporting tasks such as summarization, captioning, and question answering. Their performance has been driven by advances in frame sampling, progressing from uniform sampling to semantic-similarity-based and, most recently, prompt-guided strategies. While vulnerabilities have been identified in earlier sampling strategies, the safety of prompt-guided sampling remains unexplored. We close this gap by presenting PoisonVID, the first black-box poisoning attack that undermines prompt-guided sampling in VideoLLMs. PoisonVID compromises the underlying prompt-guided sampling mechanism through a closed-loop optimization strategy that iteratively optimizes a universal perturbation to suppress the relevance scores of harmful frames, guided by a depiction set constructed from paraphrased harmful descriptions leveraging a shadow VideoLLM and a lightweight language model, i.e., GPT-4o-mini. Comprehensively evaluated on three prompt-guided sampling strategies across three advanced VideoLLMs, PoisonVID achieves an 82%–99% attack success rate, highlighting the importance of developing more robust sampling strategies for future VideoLLMs.
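To make the closed-loop optimization concrete, the sketch below illustrates the overall shape of such an attack under stated assumptions: `paraphrase` stands in for GPT-4o-mini building the depiction set, and `score_relevance` stands in for a shadow VideoLLM's prompt-guided frame relevance score. All function names and the toy scoring rule are hypothetical illustrations, not the paper's actual implementation.

```python
def paraphrase(description, n=3):
    """Hypothetical stand-in for GPT-4o-mini paraphrasing a harmful
    description into variants, forming the depiction set."""
    return [f"{description} (variant {i})" for i in range(n)]

def score_relevance(depiction, perturbation_tokens):
    """Hypothetical stand-in for a shadow VideoLLM's relevance score of a
    harmful frame given the perturbed prompt. In this toy model, relevance
    drops by 0.2 for each 'distractor' token in the perturbation."""
    return max(0.0, 1.0 - 0.2 * perturbation_tokens.count("distractor"))

def optimize_perturbation(depiction_set, vocab, steps=10):
    """Closed-loop greedy search: at each step, append the token that most
    suppresses the average relevance score over the depiction set, so the
    resulting perturbation is universal across paraphrased descriptions."""
    perturbation = []
    for _ in range(steps):
        best_tok, best_score = None, float("inf")
        for tok in vocab:
            cand = perturbation + [tok]
            avg = sum(score_relevance(d, cand) for d in depiction_set) / len(depiction_set)
            if avg < best_score:
                best_tok, best_score = tok, avg
        perturbation.append(best_tok)
        if best_score == 0.0:  # harmful frames fully suppressed
            break
    return " ".join(perturbation)

if __name__ == "__main__":
    depiction_set = paraphrase("a person performing a harmful action")
    vocab = ["distractor", "benign", "noise"]
    perturbation = optimize_perturbation(depiction_set, vocab)
    print(perturbation)
```

In a real black-box attack, the scoring query would go to the shadow VideoLLM rather than a closed-form function, and the candidate pool would come from the lightweight language model; the loop structure, however, remains the same: query, score against the depiction set, keep the most suppressive update.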