Two Frames Matter: A Temporal Attack for Text-to-Video Model Jailbreaking

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-video models may autonomously generate harmful content in intermediate frames when processing seemingly benign yet temporally sparse prompts, such as those specifying only start and end frames, thereby exposing blind spots in conventional input/output filtering mechanisms. This work proposes a temporal attack framework, Temporal Frame Manipulation (TFM), which conceals malicious intent within two-frame sparse prompts through fragmented prompt engineering, implicit semantic substitution, and crafted temporal boundary conditions, thereby inducing models to produce prohibited content without any explicit sensitive keywords. The approach reveals, for the first time, a critical security vulnerability in the model's temporal trajectory completion process. Validated across multiple open-source and commercial models, TFM achieves up to a 12% increase in attack success rate, highlighting the inadequacy of existing safety mechanisms in regulating self-generated temporal content.

📝 Abstract
Recent text-to-video (T2V) models can synthesize complex videos from lightweight natural language prompts, raising urgent concerns about safety alignment in the event of real-world misuse. Prior jailbreak attacks typically rewrite unsafe prompts into paraphrases that evade content filters while preserving meaning. Yet these approaches often still retain explicit sensitive cues in the input text and therefore overlook a more profound, video-specific weakness. In this paper, we identify a temporal trajectory infilling vulnerability of T2V systems under fragmented prompts: when the prompt specifies only sparse boundary conditions (e.g., start and end frames) and leaves the intermediate evolution underspecified, the model may autonomously reconstruct a plausible trajectory that includes harmful intermediate frames, despite the prompt appearing benign to input- or output-side filtering. Building on this observation, we propose Temporal Frame Manipulation (TFM), a fragmented prompting framework that converts an originally unsafe request into a temporally sparse two-frame specification and further reduces overtly sensitive cues via implicit substitution. Extensive evaluations across multiple open-source and commercial T2V models demonstrate that TFM consistently enhances jailbreak effectiveness, achieving up to a 12% increase in attack success rate on commercial systems. Our findings highlight the need for temporally aware safety mechanisms that account for model-driven completion beyond prompt surface form.
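The abstract describes TFM's two-frame construction only conceptually, and the paper's concrete pipeline is not given on this page. As a purely illustrative sketch of the general idea (the prompt schema, function names, and substitution table below are all hypothetical assumptions, not the authors' implementation), a temporally sparse two-frame prompt with implicit substitution might be assembled like this:

```python
# Hypothetical illustration of the two-frame fragmented-prompt idea:
# only boundary frames are described, the intermediate trajectory is
# left for the model to infill, and overtly sensitive wording is
# replaced with implicit paraphrases. All names here are assumptions.

# Illustrative substitution table: sensitive term -> implicit paraphrase.
IMPLICIT_SUBSTITUTIONS = {
    "a collision": "an abrupt, chaotic disturbance",
}

def soften(text: str) -> str:
    """Replace overtly sensitive phrases with implicit stand-ins."""
    for term, paraphrase in IMPLICIT_SUBSTITUTIONS.items():
        text = text.replace(term, paraphrase)
    return text

def two_frame_prompt(start_scene: str, end_scene: str) -> dict:
    """Build a temporally sparse prompt: specify frame_0 and frame_N
    only, leaving the in-between evolution underspecified."""
    return {
        "frame_0": soften(start_scene),
        "frame_N": soften(end_scene),
        "instruction": ("Generate a smooth video that transitions "
                        "from frame_0 to frame_N."),
    }

prompt = two_frame_prompt(
    "a quiet intersection with light traffic",
    "the same intersection moments after a collision",
)
print(prompt["frame_N"])
```

The point of the sketch is structural: neither boundary description contains an explicit sensitive keyword, yet the model must infill the intermediate frames itself, which is exactly the completion step the paper argues current input/output filters do not inspect.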
Problem

Research questions and friction points this paper is trying to address.

text-to-video
jailbreak
temporal vulnerability
safety alignment
fragmented prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

temporal jailbreak
text-to-video safety
fragmented prompting
trajectory infilling
two-frame attack
Moyang Chen
College of Science, Mathematics and Technology, Wenzhou-Kean University
Zonghao Ying
SKLCCSE, BUAA
Trustworthy AI
Wenzhuo Xu
360 AI Security Lab
Quancheng Zou
360 AI Security Lab
Deyue Zhang
360 AI Security Lab
Dongdong Yang
360 AI Security Lab
Xiangzheng Zhang
360
AI safety · Large language models · Information Retrieval