3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video virtual try-on methods struggle to simultaneously achieve high visual fidelity and temporal consistency under complex garment textures and dynamic human poses. This paper proposes the first diffusion-based, 3D-guided video virtual try-on framework. It employs animatable, textured explicit 3D meshes as frame-level dynamic priors; introduces an adaptive keyframe-driven 3D reconstruction and skinning animation pipeline; and incorporates a robust rectangular masking strategy to suppress inter-frame garment information leakage during motion. The method integrates dynamic texture mapping with keyframe guidance and is evaluated on HR-VVT—a newly constructed high-resolution video virtual try-on benchmark. Experiments demonstrate significant improvements in generation quality and temporal stability, effectively mitigating texture misalignment and motion jitter. Our approach outperforms state-of-the-art methods across multiple quantitative metrics.

Technology Category

Application Category

📝 Abstract
Video try-on replaces clothing in videos with target garments. Existing methods struggle to generate high-quality and temporally consistent results when handling complex clothing patterns and diverse body poses. We present 3DV-TON, a novel diffusion-based framework for generating high-fidelity and temporally consistent video try-on results. Our approach employs generated animatable textured 3D meshes as explicit frame-level guidance, alleviating the issue of models over-focusing on appearance fidelity at the expanse of motion coherence. This is achieved by enabling direct reference to consistent garment texture movements throughout video sequences. The proposed method features an adaptive pipeline for generating dynamic 3D guidance: (1) selecting a keyframe for initial 2D image try-on, followed by (2) reconstructing and animating a textured 3D mesh synchronized with original video poses. We further introduce a robust rectangular masking strategy that successfully mitigates artifact propagation caused by leaking clothing information during dynamic human and garment movements. To advance video try-on research, we introduce HR-VVT, a high-resolution benchmark dataset containing 130 videos with diverse clothing types and scenarios. Quantitative and qualitative results demonstrate our superior performance over existing methods. The project page is at this link https://2y7c3.github.io/3DV-TON/
Problem

Research questions and friction points this paper is trying to address.

Generating high-quality video try-on with complex clothing patterns
Ensuring temporal consistency across diverse body poses
Mitigating artifact propagation during dynamic garment movements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion models for video try-on
Employs animatable textured 3D meshes
Introduces rectangular masking strategy
🔎 Similar Papers
No similar papers found.