Limb-Aware Virtual Try-On Network With Progressive Clothing Warping

📅 2025-03-18
🏛️ IEEE Transactions on Multimedia
📈 Citations: 4
Influential: 0
🤖 AI Summary
Virtual try-on faces two key challenges: global garment deformation causing appearance distortion, and insufficient limb texture modeling leading to blurry limb details. To address these, we propose PL-VTON, a limb-aware virtual try-on network whose Progressive Clothing Warping (PCW) module models the location and size of the in-shop clothing and applies a two-stage alignment strategy, aided by a gravity-aware loss, for fine-grained and physically plausible garment deformation. A Person Parsing Estimator (PPE) guided by a non-limb target parsing map provides human-structure constraints that alleviate texture bleeding between clothing and body regions. Additionally, a Limb-Aware Texture Fusion (LTF) module fuses source-person limb textures into the coarse try-on result under limb-aware guidance, markedly improving limb realism. Extensive experiments demonstrate that PL-VTON outperforms state-of-the-art methods both qualitatively and quantitatively, with notable improvements in limb detail, clothing-edge handling, and garment fit.
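The limb-aware fusion idea in the summary can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the per-pixel logits below are a hypothetical stand-in for the learned limb-aware attention, and the blend simply interpolates between the coarse try-on result and the source limb textures.

```python
import numpy as np

def limb_aware_fusion(coarse, limb_texture, limb_logits):
    """Blend a coarse try-on result with source limb textures.

    coarse, limb_texture: (H, W, 3) float arrays in [0, 1]
    limb_logits: (H, W) float array; high values select limb texture.
    The sigmoid over the logits plays the role of the learned
    attention map in the paper (a hypothetical stand-in here).
    """
    attn = 1.0 / (1.0 + np.exp(-limb_logits))  # sigmoid -> [0, 1]
    attn = attn[..., None]                     # broadcast over RGB
    return attn * limb_texture + (1.0 - attn) * coarse
```

Where the attention is near 1 the output copies the source limb texture; elsewhere the coarse result passes through unchanged.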

📝 Abstract
Image-based virtual try-on aims to transfer an in-shop clothing image to a person image. Most existing methods adopt a single global deformation to perform clothing warping directly, which lacks fine-grained modeling of the in-shop clothing and leads to distorted clothing appearance. In addition, existing methods usually fail to generate limb details well because they are limited by the clothing-agnostic person representation they use, which does not refer to the limb textures of the person image. To address these problems, we propose a Limb-aware Virtual Try-on Network, named PL-VTON, which performs fine-grained clothing warping progressively and generates high-quality try-on results with realistic limb details. Specifically, we present Progressive Clothing Warping (PCW), which explicitly models the location and size of the in-shop clothing and utilizes a two-stage alignment strategy to progressively align the in-shop clothing with the human body. Moreover, a novel gravity-aware loss that considers the fit of the person wearing clothing is adopted to better handle the clothing edges. Then, we design a Person Parsing Estimator (PPE) with a non-limb target parsing map to semantically divide the person into various regions, which provides structural constraints on the human body and therefore alleviates texture bleeding between clothing and body regions. Finally, we introduce Limb-aware Texture Fusion (LTF), which focuses on generating realistic details in limb regions: a coarse try-on result is first generated by fusing the warped clothing image with the person image, and limb textures are then fused with the coarse result under limb-aware guidance to refine limb details. Extensive experiments demonstrate that our PL-VTON outperforms the state-of-the-art methods both qualitatively and quantitatively.
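The first stage of the progressive warping described in the abstract (modeling the location and size of the in-shop clothing) can be sketched as a simple box-to-box alignment. This is a hypothetical illustration, not the paper's method: the learned second-stage fine deformation (flow-based warping in typical try-on pipelines) is omitted, and nearest-neighbour resampling stands in for it.

```python
import numpy as np

def bbox(mask):
    """Return (y0, y1, x0, x1), the inclusive bounding box of a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max(), xs.min(), xs.max()

def coarse_align(cloth_mask, body_mask):
    """Stage-one alignment: scale and translate the clothing region so its
    bounding box matches the target body (torso) bounding box.
    A hypothetical stand-in for PCW's first stage; the fine per-pixel
    deformation of the second stage is not modeled here."""
    cy0, cy1, cx0, cx1 = bbox(cloth_mask)
    by0, by1, bx0, bx1 = bbox(body_mask)
    out = np.zeros_like(body_mask)
    # nearest-neighbour resample of the cloth box into the body box
    ys = np.linspace(cy0, cy1, by1 - by0 + 1).round().astype(int)
    xs = np.linspace(cx0, cx1, bx1 - bx0 + 1).round().astype(int)
    out[by0:by1 + 1, bx0:bx1 + 1] = cloth_mask[np.ix_(ys, xs)]
    return out
```

After this coarse placement, a second fine-grained stage would refine the garment shape pixel by pixel; doing the alignment progressively is what keeps the global step from distorting the clothing appearance.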
Problem

Research questions and friction points this paper is trying to address.

Single global deformation lacks fine-grained clothing modeling, distorting garment appearance.
Clothing-agnostic person representations ignore source limb textures, yielding blurry limb details.
Texture bleeding occurs between clothing and body regions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Clothing Warping for precise alignment
Gravity-aware loss improves clothing edge handling
Limb-aware Texture Fusion enhances realistic details
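One plausible reading of the gravity-aware loss listed above can be sketched as a row-weighted reconstruction loss. This is an assumption for illustration, not the paper's formula: errors near the lower clothing edge, where gravity makes the garment hang, are weighted more heavily than errors near the top.

```python
import numpy as np

def gravity_aware_l1(warped, target):
    """Row-weighted L1 loss between a warped clothing map and its target.
    Hypothetical interpretation of a gravity-aware loss: weights grow
    linearly from the top row (1.0) to the bottom row (2.0), so errors
    along the hanging lower edge of the garment cost more."""
    H = warped.shape[0]
    w = np.linspace(1.0, 2.0, H)[:, None]   # heavier toward the hem
    return float(np.mean(w * np.abs(warped - target)))
```

The same error therefore costs more when it occurs near the hem, pushing the warping network to get the hanging edges right.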