DreamVVT: Mastering Realistic Video Virtual Try-On in the Wild via a Stage-Wise Diffusion Transformer Framework

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video virtual try-on (VVT) methods are constrained by scarce paired training data and struggle to simultaneously preserve fine-grained garment texture fidelity and long-term temporal consistency. This paper proposes DreamVVT, a two-stage framework built on Diffusion Transformers. In Stage I, a multi-frame try-on model integrated with a vision-language model synthesizes high-fidelity, semantically consistent keyframe try-on images, explicitly decoupling appearance modeling from motion. In Stage II, skeleton maps and fine-grained motion and appearance descriptions, together with the keyframe images, condition a pretrained video generation model enhanced with LoRA adapters, enabling generalizable synthesis without paired garment-centric data. The key idea is to combine Diffusion Transformers with this motion-appearance disentanglement so that dynamic details and inter-frame consistency are preserved even under unseen poses or viewpoints. Extensive experiments demonstrate that DreamVVT significantly outperforms state-of-the-art methods on real-world benchmarks, enabling semantically controllable, detail-rich, and temporally coherent VVT video synthesis.
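
The summary and abstract describe enhancing a pretrained video generation model with LoRA adapters. For reference, below is a minimal, generic sketch of low-rank adaptation applied to a single linear layer, assuming PyTorch; the class name, rank, and scaling values are illustrative placeholders, and nothing here is taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update: W x + (alpha/r) * B(A(x))."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                                    # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)    # down-projection A
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)   # up-projection B
        nn.init.zeros_(self.lora_b.weight)                             # update starts at zero: no change at init
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Example: adapt one projection of a pretrained block; only the LoRA weights
# (lora_a, lora_b) receive gradients during fine-tuning.
proj = nn.Linear(1024, 1024)
adapted = LoRALinear(proj, rank=8)
out = adapted(torch.randn(2, 16, 1024))   # (batch, tokens, channels)
```

In practice such adapters are typically inserted into the attention and feed-forward projections of transformer blocks, keeping the number of trainable parameters small while leaving the pretrained video prior intact.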

📝 Abstract
Video virtual try-on (VVT) technology has garnered considerable academic interest owing to its promising applications in e-commerce advertising and entertainment. However, most existing end-to-end methods rely heavily on scarce paired garment-centric datasets and fail to effectively leverage priors of advanced visual models and test-time inputs, making it challenging to accurately preserve fine-grained garment details and maintain temporal consistency in unconstrained scenarios. To address these challenges, we propose DreamVVT, a carefully designed two-stage framework built upon Diffusion Transformers (DiTs), which is inherently capable of leveraging diverse unpaired human-centric data to enhance adaptability in real-world scenarios. To further leverage prior knowledge from pretrained models and test-time inputs, in the first stage, we sample representative frames from the input video and utilize a multi-frame try-on model integrated with a vision-language model (VLM) to synthesize high-fidelity and semantically consistent keyframe try-on images. These images serve as complementary appearance guidance for subsequent video generation. In the second stage, skeleton maps together with fine-grained motion and appearance descriptions are extracted from the input content, and these, along with the keyframe try-on images, are then fed into a pretrained video generation model enhanced with LoRA adapters. This ensures long-term temporal coherence for unseen regions and enables highly plausible dynamic motions. Extensive quantitative and qualitative experiments demonstrate that DreamVVT surpasses existing methods in preserving detailed garment content and temporal stability in real-world scenarios. Our project page is available at https://virtu-lab.github.io/
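
Stage I starts by sampling representative frames from the input video; the abstract does not specify the selection criterion, so the sketch below, assuming NumPy and already-decoded frames, uses greedy farthest-point sampling over downsampled frames as one plausible stand-in for picking pose- and viewpoint-diverse keyframes. The function name and the choice of cosine distance are assumptions for illustration only.

```python
import numpy as np

def sample_representative_frames(frames: np.ndarray, k: int = 4) -> list:
    """frames: (T, H, W, C) uint8 array of decoded video frames; returns k diverse frame indices."""
    # Cheap per-frame descriptor: spatially downsample, flatten, and L2-normalize.
    feats = frames[:, ::16, ::16, :].reshape(len(frames), -1).astype(np.float32)
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8

    chosen = [0]                                   # seed with the first frame
    min_dist = np.full(len(frames), np.inf)
    for _ in range(k - 1):
        # Distance of every frame to the most recently chosen one (cosine distance).
        min_dist = np.minimum(min_dist, 1.0 - feats @ feats[chosen[-1]])
        chosen.append(int(np.argmax(min_dist)))    # pick the frame farthest from all chosen so far
    return sorted(chosen)

# Usage: keyframe_indices = sample_representative_frames(video_frames, k=4)
# The selected frames would then be passed to the multi-frame try-on model in Stage I.
```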
Problem

Research questions and friction points this paper is trying to address.

Preserving fine-grained garment details in videos
Maintaining temporal consistency in unconstrained scenarios
Leveraging unpaired data for real-world adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage Diffusion Transformer framework
Multi-frame try-on with vision-language model
LoRA-enhanced video generation model
Authors

Tongchun Zuo, ByteDance Intelligent Creation
Zaiyu Huang, ByteDance Intelligent Creation
Shuliang Ning, The Chinese University of Hong Kong, Shenzhen
Ente Lin, Shenzhen International Graduate School, Tsinghua University
Chao Liang, ByteDance Intelligent Creation
Zerong Zheng, ByteDance
Jianwen Jiang, ByteDance Intelligent Creation
Yuan Zhang, ByteDance Intelligent Creation
Mingyuan Gao, Professor, Institute of Chemistry, Chinese Academy of Sciences
Xin Dong, ByteDance Intelligent Creation