DiTraj: training-free trajectory control for video diffusion transformer

📅 2025-09-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing trajectory-control methods for video generation either require extensive retraining or are designed for U-Net architectures, and so cannot deliver controllability while preserving DiT's performance advantages. This paper proposes a training-free trajectory control method for DiT models. Our approach decouples motion via foreground-background separation guidance and introduces a 3D-aware Spatial-Temporal Decoupled 3D-RoPE (STD-RoPE) that dynamically modulates positional-encoding density for foreground tokens. Furthermore, we enhance semantic-motion alignment through LLM-driven prompt decomposition and analysis of 3D full attention. To the best of our knowledge, this is the first method enabling high-precision, user-intent-driven trajectory control over DiT models without any fine-tuning. Experiments demonstrate significant improvements over state-of-the-art approaches in both trajectory accuracy and video quality, achieving an effective balance between efficiency and controllability.

πŸ“ Abstract
Diffusion Transformer (DiT)-based video generation models with 3D full attention exhibit strong generative capabilities. Trajectory control represents a user-friendly task in the field of controllable video generation. However, existing methods either require substantial training resources or are specifically designed for U-Net, failing to take advantage of the superior performance of DiT. To address these issues, we propose DiTraj, a simple but effective training-free framework for trajectory control in text-to-video generation, tailored for DiT. Specifically, to inject the object's trajectory, we first propose foreground-background separation guidance: a Large Language Model (LLM) converts the user-provided prompt into foreground and background prompts, which respectively guide the generation of foreground and background regions in the video. Then, we analyze 3D full attention and explore the tight correlation between inter-token attention scores and position embeddings. Based on this, we propose inter-frame Spatial-Temporal Decoupled 3D-RoPE (STD-RoPE). By modifying only foreground tokens' position embeddings, STD-RoPE eliminates their cross-frame spatial discrepancies, strengthening cross-frame attention among them and thus enhancing trajectory control. Additionally, we achieve 3D-aware trajectory control by regulating the density of position embeddings. Extensive experiments demonstrate that our method outperforms previous methods in both video quality and trajectory controllability.
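The abstract's core STD-RoPE idea, rewriting only foreground tokens' positional coordinates so the object occupies the same spatial position in every frame, and rescaling coordinate density to suggest depth, can be illustrated with a minimal NumPy sketch. This is a conceptual approximation, not the paper's implementation; the function name, the mask and trajectory inputs, and the `depth_scale` knob are all assumptions made for illustration.

```python
import numpy as np

def std_rope_positions(T, H, W, fg_masks, traj, depth_scale=None):
    """Illustrative sketch of STD-RoPE-style coordinates.

    Builds a (T, H, W, 3) grid of (t, y, x) positions for 3D RoPE, then
    shifts only foreground tokens' spatial coordinates so that the object
    center maps to the same (y, x) in every frame, removing cross-frame
    spatial discrepancies among foreground tokens.

    fg_masks: (T, H, W) bool array marking foreground tokens per frame.
    traj: list of (cy, cx) object-box centers, one per frame.
    depth_scale: optional per-frame factor; scaling coordinate density
        around the object center emulates the object moving nearer or
        farther (the abstract's 3D-aware control via embedding density).
    """
    pos = np.stack(
        np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing="ij"),
        axis=-1,
    ).astype(float)  # (T, H, W, 3): [..., 0]=t, [..., 1]=y, [..., 2]=x
    cy0, cx0 = traj[0]
    for t in range(T):
        cy, cx = traj[t]
        m = fg_masks[t]
        # Undo the trajectory offset for foreground tokens only: the object
        # center in frame t gets the same spatial coordinate as in frame 0.
        pos[t, ..., 1] = np.where(m, pos[t, ..., 1] - (cy - cy0), pos[t, ..., 1])
        pos[t, ..., 2] = np.where(m, pos[t, ..., 2] - (cx - cx0), pos[t, ..., 2])
        if depth_scale is not None:
            # Denser (<1) or sparser (>1) foreground coordinates around the
            # shared center act as a crude depth cue.
            s = depth_scale[t]
            pos[t, ..., 1] = np.where(m, cy0 + (pos[t, ..., 1] - cy0) * s, pos[t, ..., 1])
            pos[t, ..., 2] = np.where(m, cx0 + (pos[t, ..., 2] - cx0) * s, pos[t, ..., 2])
    return pos
```

Background tokens keep the standard 3D-RoPE grid; only the object's tokens are re-indexed as if spatially static across frames, which is what strengthens cross-frame attention among them in the paper's account.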
Problem

Research questions and friction points this paper is trying to address.

Training-free trajectory control for video diffusion transformers
Enhancing object motion guidance without model retraining
Improving cross-frame attention through spatial-temporal position embedding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foreground-background separation guidance using an LLM
Inter-frame Spatial-Temporal Decoupled 3D-RoPE (STD-RoPE)
Modifying foreground position embeddings for trajectory control
Authors
Cheng Lei, Beijing University of Posts and Telecommunications
Jiayu Zhang, Lenovo
Yue Ma, Bytedance (NLP, Dialogue System, LLM)
Xinyu Wang, Tsinghua University
Long Chen, Lenovo
Liang Tang, Google (Reinforcement Learning, Recommender System, Personalization, Computational Advertising, Ads Quality)
Yiqiang Yan, Lenovo
Fei Su, Beijing University of Posts and Telecommunications
Zhicheng Zhao, Associate Professor at the School of Artificial Intelligence, Anhui University (Computer Vision)