Eevee: Towards Close-up High-resolution Video-based Virtual Try-on

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video-based virtual try-on methods face two key bottlenecks: (1) reliance on single-view garment images leads to severe texture distortion, and (2) generated outputs are limited to full-body shots, failing to meet e-commerce demands for high-fidelity close-up visualization. To address these limitations, we introduce the first high-resolution video virtual try-on dataset supporting both full-body and close-up views, featuring real-world multi-view try-on videos and corresponding fine-grained textual descriptions. We further propose the Video Garment Inception Distance (VGID), a novel metric quantifying consistency preservation of garment texture and structural geometry. Comprehensive benchmarking on this dataset reveals substantial deficiencies in existing methods regarding detail fidelity. Experimental results demonstrate that our dataset significantly improves texture fidelity and structural accuracy in both close-up and full-body scenarios, establishing critical infrastructure and standardized evaluation protocols for high-fidelity video virtual try-on research.

📝 Abstract
Video virtual try-on technology provides a cost-effective solution for creating marketing videos in fashion e-commerce. However, its practical adoption is hindered by two critical limitations. First, current virtual try-on datasets rely on a single garment image as input, which limits the accurate capture of realistic texture details. Second, most existing methods focus solely on generating full-shot try-on videos, neglecting the business demand for videos that also provide detailed close-ups. To address these challenges, we introduce a high-resolution dataset for video-based virtual try-on with two key features. First, it provides more detailed information about each garment, including high-fidelity images with detailed close-ups and textual descriptions. Second, it uniquely includes both full-shot and close-up try-on videos of real human models. Furthermore, accurately assessing consistency becomes significantly more critical for close-up videos, which demand high-fidelity preservation of garment details. To facilitate such fine-grained evaluation, we propose a new garment-consistency metric, VGID (Video Garment Inception Distance), that quantifies the preservation of both texture and structure. Our experiments validate these contributions. We demonstrate that, by utilizing the detailed images from our dataset, existing video generation models can extract and incorporate texture features, significantly enhancing the realism and detail fidelity of virtual try-on results. Furthermore, we conduct a comprehensive benchmark of recent models, which effectively identifies texture- and structure-preservation problems in current methods.
Problem

Research questions and friction points this paper is trying to address.

Limited garment detail capture from single input images
Lack of close-up video generation for detailed views
Inadequate evaluation metrics for garment consistency preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-resolution dataset with detailed garment images
Includes full-shot and close-up try-on videos
Proposes VGID metric for garment consistency evaluation
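The page does not reproduce VGID's definition. As a rough illustration only: Inception-distance-style metrics (FID and its video variants) typically reduce to a Fréchet distance between Gaussians fitted to deep features of reference and generated frames, and VGID presumably applies a similar distance to garment-region features. A minimal NumPy sketch of that underlying Fréchet distance, with the feature extractor, garment cropping, and any structure-specific term omitted and assumed:

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: arrays of shape (num_samples, feature_dim), e.g.
    deep features of real vs. generated garment frames (assumed inputs).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((Sa Sb)^{1/2}) via the symmetric form (Sa^{1/2} Sb Sa^{1/2})^{1/2},
    # which stays real for PSD covariances.
    cov_a_half = _sqrtm_psd(cov_a)
    tr_covmean = np.trace(_sqrtm_psd(cov_a_half @ cov_b @ cov_a_half))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_covmean)
```

The distance is zero when the two feature distributions coincide and grows as their means or covariances diverge; the paper's actual VGID may differ in feature choice and aggregation over frames.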
Jianhao Zeng
Amap, Alibaba Group
Yancheng Bai
Amap, Alibaba Group
Ruidong Chen
Amap, Alibaba Group, Tianjin University
Xuanpu Zhang
Tianjin University
Lei Sun
Amap, Alibaba Group
Dongyang Jin
Amap, Alibaba Group
Ryan Xu
Amap, Alibaba Group
Nannan Zhang
research scientist, NIH
Dan Song
Tianjin University
Xiangxiang Chu
Amap, Alibaba Group