🤖 AI Summary
Existing DiT-based video virtual try-on methods struggle to simultaneously model fine-grained garment dynamics and ensure inter-frame background consistency, while their interactive modules incur high computational overhead and are hampered by small-scale, low-quality datasets.
Method: We propose KeyTailor—a novel framework featuring instruction-guided keyframe sampling that injects dynamic and consistency priors from keyframes into a standard DiT without architectural modification—and two dedicated modules: Garment Detail Enhancement and Collaborative Background Optimization, enabling multi-condition latent-space fusion (pose, mask, noise, keyframe).
Contribution/Results: We introduce ViT-HD, the first large-scale, high-definition video virtual try-on dataset (15,070 clips, 810×1080 resolution). Experiments demonstrate state-of-the-art performance across both dynamic and static scenarios, with significant improvements in garment texture fidelity and background integrity, 32% faster inference, and 2.1× accelerated training convergence.
📝 Abstract
Although diffusion transformer (DiT)-based video virtual try-on (VVT) has made significant progress in synthesizing realistic videos, existing methods still struggle to capture fine-grained garment dynamics and preserve background integrity across video frames. They also incur high computational costs due to additional interaction modules introduced into DiTs, while the limited scale and quality of existing public datasets also restrict model generalization and effective training. To address these challenges, we propose a novel framework, KeyTailor, along with a large-scale, high-definition dataset, ViT-HD. The core idea of KeyTailor is a keyframe-driven details injection strategy, motivated by the fact that keyframes inherently contain both foreground dynamics and background consistency. Specifically, KeyTailor adopts an instruction-guided keyframe sampling strategy to filter informative frames from the input video. Subsequently,two tailored keyframe-driven modules, the garment details enhancement module and the collaborative background optimization module, are employed to distill garment dynamics into garment-related latents and to optimize the integrity of background latents, both guided by keyframes.These enriched details are then injected into standard DiT blocks together with pose, mask, and noise latents, enabling efficient and realistic try-on video synthesis. This design ensures consistency without explicitly modifying the DiT architecture, while simultaneously avoiding additional complexity. In addition, our dataset ViT-HD comprises 15, 070 high-quality video samples at a resolution of 810*1080, covering diverse garments. Extensive experiments demonstrate that KeyTailor outperforms state-of-the-art baselines in terms of garment fidelity and background integrity across both dynamic and static scenarios.