Playmate2: Training-Free Multi-Character Audio-Driven Animation via Diffusion Transformer with Reward Feedback

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio-driven video generation methods suffer from critical limitations in lip-sync accuracy, long-term temporal coherence, and multi-character coordinated animation. To address these, we propose Mask-CFG—a training-free framework integrating positional offset inference, LoRA-based efficient fine-tuning, reward-guided optimization, and block-wise diffusion Transformer (DiT) inference—enabling high-fidelity, long-duration, natural dialogue video synthesis for arbitrary numbers of characters without architectural modification or domain-specific data. Our core innovation lies in decoupling character control from speech-driven motion via masked classifier-free guidance and dynamic reward calibration. This significantly improves lip-sync accuracy (+12.7% LSE), temporal consistency (+38.4% TCV), and multi-character motion naturalness. On multi-character benchmarks, Mask-CFG surpasses state-of-the-art methods while maintaining high fidelity, low inference cost, and strong generalization.

Technology Category

Application Category

📝 Abstract
Recent advances in diffusion models have significantly improved audio-driven human video generation, surpassing traditional methods in both quality and controllability. However, existing approaches still face challenges in lip-sync accuracy, temporal coherence for long video generation, and multi-character animation. In this work, we propose a diffusion transformer (DiT)-based framework for generating lifelike talking videos of arbitrary length, and introduce a training-free method for multi-character audio-driven animation. First, we employ a LoRA-based training strategy combined with a position shift inference approach, which enables efficient long video generation while preserving the capabilities of the foundation model. Moreover, we combine partial parameter updates with reward feedback to enhance both lip synchronization and natural body motion. Finally, we propose a training-free approach, Mask Classifier-Free Guidance (Mask-CFG), for multi-character animation, which requires no specialized datasets or model modifications and supports audio-driven animation for three or more characters. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches, achieving high-quality, temporally coherent, and multi-character audio-driven video generation in a simple, efficient, and cost-effective manner.
Problem

Research questions and friction points this paper is trying to address.

Achieving accurate lip-sync and natural body motion in audio-driven animation
Generating temporally coherent long videos with multi-character interactions
Enabling training-free multi-character animation without dataset or model modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion transformer framework for lifelike talking videos
Training-free multi-character animation with Mask-CFG
LoRA training with reward feedback enhances synchronization
🔎 Similar Papers
No similar papers found.
X
Xingpei Ma
Guangzhou Quwan Network Technology
S
Shenneng Huang
Guangzhou Quwan Network Technology
J
Jiaran Cai
Guangzhou Quwan Network Technology
Y
Yuansheng Guan
Guangzhou Quwan Network Technology
Shen Zheng
Shen Zheng
Research Scientist, Bytedance Seed
Large Language Model
H
Hanfeng Zhao
Guangzhou Quwan Network Technology
Q
Qiang Zhang
Guangzhou Quwan Network Technology
S
Shunsi Zhang
Guangzhou Quwan Network Technology