Streaming Drag-Oriented Interactive Video Manipulation: Drag Anything, Anytime!

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive video diffusion models lack streaming, fine-grained interactive control, particularly real-time, user-guided object manipulation during generation. The paper first formulates REVEL, a new task that lets users drag arbitrary objects at any point during generation, unifying translation, deformation, and rotation effects. Method: DragStream, a training-free streaming drag-editing framework comprising (1) an adaptive distribution self-rectification strategy that leverages neighboring frames' statistics to suppress latent distribution drift, and (2) a spatial-frequency selective optimization mechanism that exploits contextual information while curbing cross-frame interference by selectively propagating visual cues along generation. Contribution/Results: DragStream is plug-and-play with diverse autoregressive video diffusion models and, without retraining or architectural modification, delivers stable, controllable streaming video drag editing with improved user-intent fidelity and visual naturalness in long-sequence editing.
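
As a rough illustration of component (1), the sketch below renormalizes a drifted frame latent toward per-channel statistics aggregated from neighboring frames, in the spirit of AdaIN-style statistic matching. The function name, `blend` knob, and tensor shapes are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def self_rectify(latent: torch.Tensor, neighbors: torch.Tensor,
                 blend: float = 0.5) -> torch.Tensor:
    """Pull a drifted latent's per-channel statistics toward those of
    neighboring frames (a hypothetical reading of 'adaptive distribution
    self-rectification'; `blend` is an assumed strength knob)."""
    # latent: (C, H, W); neighbors: (T, C, H, W) recent context latents.
    mu = latent.mean(dim=(1, 2), keepdim=True)
    sigma = latent.std(dim=(1, 2), keepdim=True) + 1e-6

    # Reference statistics aggregated over the neighboring frames.
    ref_mu = neighbors.mean(dim=(0, 2, 3)).view(-1, 1, 1)
    ref_sigma = neighbors.std(dim=(0, 2, 3)).view(-1, 1, 1) + 1e-6

    # Whiten with the frame's own statistics, then re-color with a blend
    # of its own and the neighbors' statistics to limit drift.
    target_mu = (1 - blend) * mu + blend * ref_mu
    target_sigma = (1 - blend) * sigma + blend * ref_sigma
    return (latent - mu) / sigma * target_sigma + target_mu
```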

📝 Abstract
Achieving streaming, fine-grained control over the outputs of autoregressive video diffusion models remains challenging, making it difficult to ensure that they consistently align with user expectations. To bridge this gap, we propose **stReaming drag-oriEnted interactiVe vidEo manipuLation (REVEL)**, a new task that enables users to modify generated videos *anytime* on *anything* via fine-grained, interactive drag. Beyond DragVideo and SG-I2V, REVEL unifies drag-style video manipulation as both editing and animating video frames, each supporting user-specified translation, deformation, and rotation effects, making drag operations versatile. In resolving REVEL, we observe: *i*) drag-induced perturbations accumulate in latent space, causing severe latent distribution drift that halts the drag process; *ii*) streaming drag is easily disturbed by context frames, thereby yielding visually unnatural outcomes. We thus propose a training-free approach, **DragStream**, comprising: *i*) an adaptive distribution self-rectification strategy that leverages neighboring frames' statistics to effectively constrain the drift of latent embeddings; *ii*) a spatial-frequency selective optimization mechanism, allowing the model to fully exploit contextual information while mitigating its interference via selectively propagating visual cues along generation. Our method can be seamlessly integrated into existing autoregressive video diffusion models, and extensive experiments firmly demonstrate the effectiveness of DragStream.
Problem

Research questions and friction points this paper is trying to address.

Achieving streaming, fine-grained control over autoregressive video diffusion models
Addressing latent distribution drift caused by streaming drag operations
Mitigating context frame interference in interactive video manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive distribution self-rectification strategy constrains latent drift
Spatial-frequency selective optimization mitigates contextual interference (sketched after this list)
Training-free approach integrates into autoregressive video diffusion models
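
To make the spatial-frequency idea concrete, here is a minimal, hypothetical sketch of a frequency-selective blend: it keeps the edited frame's low-frequency content while borrowing high-frequency detail from a context frame. The split direction, `cutoff` radius, and function name are assumptions under this reading, not the paper's exact design.

```python
import torch

def frequency_selective_blend(edited: torch.Tensor, context: torch.Tensor,
                              cutoff: float = 0.25) -> torch.Tensor:
    """Keep the edited frame's low-frequency layout (the dragged content)
    while propagating high-frequency detail from a context frame; the
    split direction and `cutoff` are illustrative assumptions."""
    # edited, context: (C, H, W) feature maps of the same shape.
    _, H, W = edited.shape
    fy = torch.fft.fftfreq(H).view(H, 1)   # vertical frequency grid
    fx = torch.fft.fftfreq(W).view(1, W)   # horizontal frequency grid
    low_pass = ((fy ** 2 + fx ** 2).sqrt() <= cutoff).to(edited.dtype)

    E = torch.fft.fft2(edited)   # FFT over the last two (spatial) dims
    X = torch.fft.fft2(context)
    blended = E * low_pass + X * (1.0 - low_pass)
    return torch.fft.ifft2(blended).real
```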