Once Is Enough: Lightweight DiT-Based Video Virtual Try-On via One-Time Garment Appearance Injection

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video virtual try-on faces challenges in Diffusion Transformer (DiT) models due to the parameter-heavy dual-branch architecture and difficulties in temporal modeling: the garment reference branch requires backbone modification, introducing numerous trainable parameters, while its latent features lack intrinsic temporal structure, necessitating additional learning. This paper proposes a one-time appearance injection method based on first-frame garment replacement—editing only the initial frame via an image-based try-on model, then guiding the DiT with pose and mask sequences for full-video generation, eliminating per-frame garment feature injection. Our approach achieves, for the first time, full-video coherent synthesis driven solely by single-frame editing. It maintains state-of-the-art visual quality while significantly reducing parameter count and computational overhead. Experiments demonstrate superior efficiency and performance compared to existing dual-branch diffusion methods.

📝 Abstract
Video virtual try-on aims to replace the clothing of a person in a video with a target garment. Dual-branch architectures have achieved significant success in U-Net-based diffusion models; however, adapting them to models built on the Diffusion Transformer (DiT) remains challenging. First, introducing latent-space features from the garment reference branch requires adding to or modifying the backbone network, which introduces a large number of trainable parameters. Second, the latent-space features of garments lack inherent temporal structure and therefore require additional learning. To address these challenges, we propose OIE (Once Is Enough), a virtual try-on strategy based on first-frame clothing replacement: we employ an image-based clothing transfer model to replace the clothing in the initial frame, and then, under the content control of the edited first frame, use pose and mask information to guide the temporal prior of the video generation model in synthesizing the remaining frames. Experiments show that our method achieves superior parameter and computational efficiency while maintaining leading performance under these constraints.
Problem

Research questions and friction points this paper is trying to address.

Adapting dual-branch architectures to Diffusion Transformers for video virtual try-on
Reducing trainable parameters in garment feature integration for efficiency
Generating temporally consistent video frames using first-frame replacement guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-time garment injection via first-frame replacement
Pose and mask guidance for temporal video synthesis
Parameter-efficient Diffusion Transformer adaptation strategy
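The three contributions above can be sketched as a two-stage pipeline: edit the garment into the first frame once, then let the video model propagate it. The function names and data structures below are illustrative stand-ins, not the paper's published API; the paper does not release code with this summary.

```python
# Hypothetical sketch of the OIE (Once Is Enough) pipeline.
# Both model calls are stand-ins for the real components described in the paper.

def image_tryon(first_frame, garment):
    """Stand-in for an off-the-shelf image-based try-on model:
    returns the first frame with the target garment applied."""
    return {"frame": first_frame, "garment": garment}

def video_dit_generate(edited_first_frame, poses, masks):
    """Stand-in for the video DiT: synthesizes the remaining frames
    conditioned on the edited first frame plus pose/mask sequences,
    with no per-frame garment feature injection."""
    generated = [edited_first_frame]
    for i, (pose, mask) in enumerate(zip(poses, masks), start=1):
        generated.append({"frame": i, "cond": (pose, mask)})
    return generated

def oie_tryon(video_frames, garment, poses, masks):
    # 1) One-time appearance injection: edit only the initial frame.
    edited = image_tryon(video_frames[0], garment)
    # 2) Full-video synthesis guided by pose and mask sequences.
    return video_dit_generate(edited, poses, masks)

frames = ["f0", "f1", "f2"]
result = oie_tryon(frames, "shirt", poses=["p1", "p2"], masks=["m1", "m2"])
print(len(result))  # one edited frame + two generated frames -> 3
```

The key efficiency point the bullets make is visible in the structure: the garment appears only in step 1, so the video model's backbone needs no garment branch and no extra trainable parameters per frame.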
Yanjie Pan
School of Computer Science, Shanghai Key Laboratory of Data Science, Fudan University, China

Qingdong He
Tencent Youtu Lab
Computer vision · Generative AI · 3D Vision

Lidong Wang
School of Computer Science, Shanghai Key Laboratory of Data Science, Fudan University, China

Bo Peng
Shanghai Ocean University, China

Mingmin Chi
Fudan University
Data science · Big data · Remote sensing · Finance · Machine learning