🤖 AI Summary
Video virtual try-on faces challenges in Diffusion Transformer (DiT) models due to the parameter-heavy dual-branch architecture and difficulties in temporal modeling: the garment reference branch requires modifying the backbone, introducing numerous trainable parameters, while its latent features lack intrinsic temporal structure and so require additional learning. This paper proposes a one-time appearance injection method based on first-frame garment replacement: only the initial frame is edited via an image-based try-on model, and the DiT is then guided by pose and mask sequences to generate the full video, eliminating per-frame garment feature injection. The approach achieves, for the first time, temporally coherent full-video synthesis driven solely by single-frame editing. It maintains state-of-the-art visual quality while significantly reducing parameter count and computational overhead. Experiments demonstrate superior efficiency and performance compared to existing dual-branch diffusion methods.
📝 Abstract
Video virtual try-on aims to replace the clothing of a person in a video with a target garment. Dual-branch architectures have achieved significant success in U-Net-based diffusion models; however, adapting them to models built upon the Diffusion Transformer remains challenging. First, injecting latent-space features from the garment reference branch requires adding to or modifying the backbone network, which introduces a large number of trainable parameters. Second, the latent-space garment features lack inherent temporal structure and therefore require additional learning. To address these challenges, we propose OIE (Once is Enough), a virtual try-on strategy based on first-frame clothing replacement: we employ an image-based clothing transfer model to replace the clothing in the initial frame, and then, with the edited first frame as content control, use pose and mask information to guide the temporal prior of the video generation model in synthesizing the remaining frames. Experiments show that our method achieves superior parameter and computational efficiency while maintaining leading performance.
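The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names (`image_tryon`, `video_dit`, `oie_tryon`) and data representations are hypothetical placeholders standing in for the image-based try-on model and the pose/mask-conditioned DiT video generator.

```python
# Hypothetical sketch of the OIE (Once is Enough) pipeline.
# Frames, poses, masks, and garments are stand-in Python objects;
# a real system would operate on image tensors and trained models.

def image_tryon(first_frame, garment):
    # Stage 1 placeholder: an image-based try-on model replaces the
    # clothing in the first frame only (one-time appearance injection).
    return {"frame": first_frame, "garment": garment}

def video_dit(edited_first_frame, poses, masks):
    # Stage 2 placeholder: the video generation model synthesizes the
    # remaining frames, conditioned on the edited first frame (appearance)
    # and per-frame pose/mask sequences (motion and region control).
    # Note: no per-frame garment feature injection occurs here.
    assert len(poses) == len(masks)
    return [edited_first_frame] + [
        {"pose": p, "mask": m, "appearance": edited_first_frame}
        for p, m in zip(poses, masks)
    ]

def oie_tryon(video_frames, garment, poses, masks):
    # Edit only frame 0, then let the generator's temporal prior
    # propagate the garment appearance through the rest of the video.
    edited = image_tryon(video_frames[0], garment)
    return video_dit(edited, poses[1:], masks[1:])
```

The key design point, as stated in the abstract, is that the garment is injected exactly once (at frame 0), so the video model needs no garment reference branch and no extra trainable parameters in its backbone.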