AI Summary
Existing diffusion-based virtual try-on methods rely on redundant encoders, explicit pose estimation, and complex preprocessing pipelines, resulting in excessive parameter counts and inefficient training and inference. This paper proposes CatVTON, a lightweight, purely concatenation-driven diffusion framework for virtual try-on. Instead of employing text/image encoders, cross-attention mechanisms, or explicit pose modeling, CatVTON directly concatenates the person image and garment reference along spatial dimensions as the conditional input. It adopts a streamlined architecture combining a VAE with a simplified U-Net, adapted to the task by fine-tuning only the self-attention modules. With only 49.57M trainable parameters, CatVTON achieves state-of-the-art performance trained solely on 73K public images. It outperforms all baselines quantitatively and qualitatively, generalizes well to in-the-wild scenarios, requires no preprocessing during inference, and reduces GPU memory consumption by over 49%.
Abstract
Virtual try-on methods based on diffusion models achieve realistic effects but often require additional encoding modules, a large number of training parameters, and complex preprocessing, which increases the burden on training and inference. In this work, we re-evaluate the necessity of additional modules and analyze how to improve training efficiency and reduce redundant steps in the inference process. Based on these insights, we propose CatVTON, a simple and efficient virtual try-on diffusion model that transfers in-shop or worn garments of arbitrary categories to target individuals by concatenating them along spatial dimensions as inputs to the diffusion model. The efficiency of CatVTON is reflected in three aspects: (1) Lightweight network. CatVTON consists only of a VAE and a simplified denoising UNet, removing redundant image and text encoders as well as cross-attentions, and comprises just 899.06M parameters in total. (2) Parameter-efficient training. Through experimental analysis, we identify self-attention modules as crucial for adapting pre-trained diffusion models to the virtual try-on task, enabling high-quality results with only 49.57M training parameters. (3) Simplified inference. CatVTON eliminates unnecessary preprocessing, such as pose estimation, human parsing, and captioning, requiring only a person image and a garment reference to guide the virtual try-on process, and reduces memory usage by over 49% compared to other diffusion-based methods. Extensive experiments demonstrate that CatVTON achieves superior qualitative and quantitative results compared to baseline methods and generalizes well to in-the-wild scenarios, despite being trained solely on public datasets with 73K samples.
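The core conditioning mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the shapes, the helper name `concat_condition`, and the choice of the width axis are assumptions for demonstration (the abstract only states that person and garment are concatenated "along spatial dimensions" before being fed to the denoising UNet).

```python
import numpy as np

def concat_condition(person_latent, garment_latent):
    """Spatially concatenate person and garment latents (hypothetical helper).

    Both inputs are assumed to be VAE latents of shape (C, H, W).
    The concatenated map is what the simplified denoising UNet would
    receive, replacing cross-attention-based conditioning.
    """
    assert person_latent.shape[:2] == garment_latent.shape[:2], \
        "channel and height dims must match before concatenating along width"
    return np.concatenate([person_latent, garment_latent], axis=-1)

# Illustrative shapes only: a 4-channel latent for each image.
person = np.zeros((4, 64, 48))   # latent of the masked person image
garment = np.zeros((4, 64, 48))  # latent of the garment reference
x = concat_condition(person, garment)
print(x.shape)  # (4, 64, 96): one wide latent, no extra encoder needed
```

Because the condition is injected purely by enlarging the spatial input, the UNet's self-attention layers can attend across the person/garment boundary, which is consistent with the paper's finding that fine-tuning self-attention alone suffices.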