🤖 AI Summary
Current text-to-video models may autonomously generate harmful content in intermediate frames when processing seemingly benign yet temporally sparse prompts, such as those specifying only start and end frames, exposing blind spots in conventional input/output filtering mechanisms. This work proposes Temporal Frame Manipulation (TFM), a temporal attack framework that conceals malicious intent within two-frame sparse prompts through fragmented prompt engineering, implicit semantic substitution, and crafted temporal boundary conditions, thereby inducing models to produce policy-violating content without any explicit sensitive keywords. The approach reveals, for the first time, a critical security vulnerability in the model's temporal trajectory completion process. Validated across multiple open-source and commercial models, TFM achieves up to a 12% increase in attack success rate, highlighting the inadequacy of existing safety mechanisms in regulating self-generated temporal content.
📝 Abstract
Recent text-to-video (T2V) models can synthesize complex videos from lightweight natural language prompts, raising urgent concerns about safety alignment and real-world misuse. Prior jailbreak attacks typically rewrite unsafe prompts into paraphrases that evade content filters while preserving meaning; however, these approaches often retain explicit sensitive cues in the input text and therefore overlook a deeper, video-specific weakness. In this paper, we identify a temporal trajectory infilling vulnerability of T2V systems under fragmented prompts: when the prompt specifies only sparse boundary conditions (e.g., start and end frames) and leaves the intermediate evolution underspecified, the model may autonomously reconstruct a plausible trajectory that includes harmful intermediate frames, even though the prompt appears benign to input- and output-side filtering. Building on this observation, we propose Temporal Frame Manipulation (TFM), a fragmented prompting framework that converts an originally unsafe request into a temporally sparse two-frame specification and further reduces overtly sensitive cues via implicit substitution. Extensive evaluations across multiple open-source and commercial T2V models demonstrate that TFM consistently enhances jailbreak effectiveness, achieving up to a 12% increase in attack success rate on commercial systems. Our findings highlight the need for temporally aware safety mechanisms that account for model-driven completion beyond prompt surface form.
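To make the prompt structure concrete, the sketch below illustrates the general shape of a temporally sparse two-frame prompt with implicit substitution. All helper names and the substitution table are invented for illustration; the paper's actual prompt-engineering pipeline is not specified in this summary, and the example uses deliberately tame placeholder terms.

```python
# Illustrative sketch only: shows the *form* of a two-frame sparse prompt,
# not the paper's actual attack pipeline. Names and table entries are
# hypothetical placeholders.

# Hypothetical implicit-substitution table mapping explicit cues to
# indirect paraphrases (entries are benign placeholders).
IMPLICIT_SUBSTITUTIONS = {
    "explosion": "a sudden bloom of orange light",
}

def soften(text: str) -> str:
    """Replace overt keywords with implicit paraphrases."""
    for cue, paraphrase in IMPLICIT_SUBSTITUTIONS.items():
        text = text.replace(cue, paraphrase)
    return text

def build_two_frame_prompt(start_desc: str, end_desc: str) -> str:
    """Compose a temporally sparse prompt: only the boundary frames are
    specified, leaving the intermediate trajectory for the T2V model
    to infill on its own."""
    return (
        f"First frame: {soften(start_desc)}. "
        f"Final frame: {soften(end_desc)}. "
        "Generate a smooth, realistic transition between them."
    )

prompt = build_two_frame_prompt(
    "a quiet street at dusk",
    "the same street after an explosion",
)
print(prompt)
```

The key property the paper attributes to such prompts is that neither boundary description contains an explicit sensitive keyword, yet the underspecified middle of the trajectory is where the model's own completion can introduce harmful frames.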