Tora: Trajectory-oriented Diffusion Transformer for Video Generation

📅 2024-07-31
🏛️ arXiv.org
📈 Citations: 21
Influential: 0
📄 PDF
🤖 AI Summary
Existing diffusion models remain limited in controllable motion video generation. This paper introduces Tora, the first trajectory-oriented diffusion Transformer framework that unifies text, visual, and motion trajectory conditioning to generate high-fidelity, physically consistent dynamic videos. The method features: (1) a Trajectory Extractor that encodes arbitrary trajectories into hierarchical spacetime motion patches via a 3D motion compression network; (2) a Motion-guidance Fuser that injects these motion patches into the DiT blocks for precise spatiotemporal control; and (3) a Spatial-Temporal DiT backbone that preserves DiT's scalability across durations, aspect ratios, and resolutions. Experiments demonstrate that Tora significantly outperforms the foundational DiT model in motion fidelity while accurately simulating complex physical motions. The code is publicly available.

📝 Abstract
Recent advancements in Diffusion Transformer (DiT) have demonstrated remarkable proficiency in producing high-quality video content. Nonetheless, the potential of transformer-based diffusion models for effectively generating videos with controllable motion remains an area of limited exploration. This paper introduces Tora, the first trajectory-oriented DiT framework that concurrently integrates textual, visual, and trajectory conditions, thereby enabling scalable video generation with effective motion guidance. Specifically, Tora consists of a Trajectory Extractor (TE), a Spatial-Temporal DiT, and a Motion-guidance Fuser (MGF). The TE encodes arbitrary trajectories into hierarchical spacetime motion patches with a 3D motion compression network. The MGF integrates the motion patches into the DiT blocks to generate consistent videos that accurately follow designated trajectories. Our design aligns seamlessly with DiT's scalability, allowing precise control of video content's dynamics with diverse durations, aspect ratios, and resolutions. Extensive experiments demonstrate that Tora excels in achieving high motion fidelity compared to the foundational DiT model, while also accurately simulating the complex movements of the physical world. Code is made available at https://github.com/alibaba/Tora .
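The Trajectory Extractor described above turns an arbitrary trajectory into spacetime motion patches via a 3D motion compression network. A minimal sketch of that idea, with the learned compression network replaced by simple 3D average pooling and all function names, shapes, and downsampling factors illustrative rather than taken from the paper:

```python
import numpy as np

def rasterize_trajectory(points, T, H, W):
    """Rasterize a sparse trajectory (one (x, y) point per frame) into a
    dense per-frame displacement field of shape (T, H, W, 2)."""
    flow = np.zeros((T, H, W, 2), dtype=np.float32)
    for t in range(T - 1):
        x0, y0 = points[t]
        x1, y1 = points[t + 1]
        # Store the frame-to-frame displacement at the current position.
        flow[t, int(y0), int(x0)] = (x1 - x0, y1 - y0)
    return flow

def compress_3d(flow, kt=4, ks=8):
    """Stand-in for the 3D motion compression network: average-pool the
    displacement field over (kt x ks x ks) spacetime blocks, yielding
    motion patches aligned with the video latent grid."""
    T, H, W, C = flow.shape
    f = flow[: T // kt * kt, : H // ks * ks, : W // ks * ks]
    f = f.reshape(T // kt, kt, H // ks, ks, W // ks, ks, C)
    return f.mean(axis=(1, 3, 5))  # (T/kt, H/ks, W/ks, 2)

# Example: a 16-frame diagonal trajectory on a 64x64 canvas.
points = [(i, i) for i in range(16)]
flow = rasterize_trajectory(points, T=16, H=64, W=64)
patches = compress_3d(flow)  # shape (4, 8, 8, 2)
```

In the paper, the compression is a learned 3D network producing hierarchical motion patches; the pooling here only illustrates the shape bookkeeping from trajectory to patch grid.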
Problem

Research questions and friction points this paper is trying to address.

How can transformer-based diffusion models generate video with controllable motion at scale?
How can textual, visual, and trajectory conditions be integrated into a single DiT framework?
How can generated videos achieve high motion fidelity while simulating complex physical movements?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory-oriented DiT framework for video generation
Integrates textual, visual, and trajectory conditions
Uses Trajectory Extractor and Motion-guidance Fuser
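The Motion-guidance Fuser injects the extracted motion patches into the DiT blocks. A minimal sketch of one plausible injection style, adaptive normalization, where motion patches predict a per-token scale and shift on the normalized hidden states; all names and shapes here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Plain per-token layer normalization over the feature dimension."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def motion_guided_fusion(hidden, motion, w_scale, w_shift):
    """Adaptive-norm style fusion: motion patches modulate the visual
    tokens inside a DiT block.
    hidden: (N, D) visual tokens; motion: (N, Dm) motion patches."""
    scale = motion @ w_scale  # (N, D) per-token scale from motion
    shift = motion @ w_shift  # (N, D) per-token shift from motion
    return layer_norm(hidden) * (1.0 + scale) + shift
```

With all-zero motion patches the fusion reduces to plain layer normalization, so the block degrades gracefully to the unconditioned DiT path when no trajectory is given.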