Tora2: Motion and Appearance Customized Diffusion Transformer for Multi-Entity Video Generation

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the challenges of joint appearance-motion customization, fine-grained detail preservation, and insufficient alignment across multimodal conditions (text, motion trajectories, and visual inputs) in multi-entity video generation, this paper introduces the first diffusion Transformer framework enabling simultaneous customization of appearance and motion for multiple entities. Our method features: (1) a decoupled personalized representation extractor for high-fidelity open-set entity modeling; (2) a gated self-attention mechanism that dynamically fuses textual, trajectory-based, and visual conditioning signals; and (3) explicit motion–appearance embedding mapping coupled with contrastive loss to enforce cross-modal consistency. Evaluated on multi-entity customized video generation, our approach achieves state-of-the-art performance, significantly improving motion controllability, visual fidelity, and multimodal alignment accuracy.

Technology Category

Application Category

📝 Abstract
Recent advances in diffusion transformer models for motion-guided video generation, such as Tora, have shown significant progress. In this paper, we present Tora2, an enhanced version of Tora, which introduces several design improvements to expand its capabilities in both appearance and motion customization. Specifically, we introduce a decoupled personalization extractor that generates comprehensive personalization embeddings for multiple open-set entities, better preserving fine-grained visual details compared to previous methods. Building on this, we design a gated self-attention mechanism to integrate trajectory, textual description, and visual information for each entity. This innovation significantly reduces misalignment in multimodal conditioning during training. Moreover, we introduce a contrastive loss that jointly optimizes trajectory dynamics and entity consistency through explicit mapping between motion and personalization embeddings. Tora2 is, to our best knowledge, the first method to achieve simultaneous multi-entity customization of appearance and motion for video generation. Experimental results demonstrate that Tora2 achieves competitive performance with state-of-the-art customization methods while providing advanced motion control capabilities, which marks a critical advancement in multi-condition video generation. Project page: https://github.com/alibaba/Tora .
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-entity video generation with customized appearance and motion
Reducing misalignment in multimodal conditioning during training
Achieving simultaneous multi-entity customization for video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled personalization extractor for multi-entity embeddings
Gated self-attention integrates trajectory, text, and visuals
Contrastive loss optimizes motion and entity consistency
🔎 Similar Papers
No similar papers found.