🤖 AI Summary
This work addresses the challenge of balancing high-fidelity rendering and inference efficiency in virtual try-on by formulating it as a structured image editing task, with emphasis on preserving the subject's structure, faithfully transferring garment textures, and achieving seamless fusion. The authors propose a prompt-based unified try-on framework that uses a flow-matching Diffusion Transformer (DiT) as the backbone, enhanced by latent multimodal condition concatenation and a self-reference mechanism. This design substantially accelerates inference while maintaining high visual fidelity. Evaluated on standard benchmarks, the method outperforms existing virtual try-on and general-purpose image editing models, achieving a strong trade-off between realism and computational efficiency.
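For readers unfamiliar with the backbone's training objective, the standard conditional flow-matching formulation is sketched below. The abstract does not specify which variant PROMO uses, so treat this as background rather than the paper's definition:

```latex
% Standard rectified-flow / flow-matching objective (background only;
% not necessarily the exact variant used by PROMO).
% x_0 ~ N(0, I) is noise, x_1 the clean target latent, c the conditions.
x_t = (1 - t)\, x_0 + t\, x_1, \qquad
\mathcal{L}(\theta) = \mathbb{E}_{t,\, x_0,\, x_1}
  \left\| v_\theta(x_t, t, c) - (x_1 - x_0) \right\|_2^2 .
```

The model $v_\theta$ learns the velocity field along the straight-line path between noise and data; sampling integrates this field from $t = 0$ to $t = 1$, which typically needs far fewer steps than ancestral diffusion sampling.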
📝 Abstract
Virtual Try-on (VTON) has become a core capability for online retail, where realistic try-on results provide reliable fit guidance, reduce returns, and benefit both consumers and merchants. Diffusion-based VTON methods achieve photorealistic synthesis, yet they often rely on intricate architectures such as auxiliary reference networks and suffer from slow sampling, making the trade-off between fidelity and efficiency a persistent challenge. We approach VTON as a structured image editing problem that demands strong conditional generation under three key requirements: subject preservation, faithful texture transfer, and seamless harmonization. From this perspective, our training framework is generic and transfers to broader image editing tasks. Moreover, the paired data produced by VTON constitutes a rich supervisory resource for training general-purpose editors. We present PROMO, a promptable virtual try-on framework built upon a flow-matching DiT backbone with latent multimodal condition concatenation. By combining this efficient conditioning scheme with a self-reference mechanism, our approach substantially reduces inference overhead. On standard benchmarks, PROMO surpasses both prior VTON methods and general image editing models in visual fidelity while delivering a competitive balance between quality and speed. These results demonstrate that flow-matching transformers, coupled with latent multimodal conditioning and self-reference acceleration, offer an effective and training-efficient solution for high-quality virtual try-on.
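To make the conditioning idea concrete, here is a minimal sketch of one flow-matching training step with latent condition concatenation. All names (`dit`, `flow_matching_step`, the condition layout) are hypothetical placeholders, not the paper's API; the abstract also does not say whether concatenation happens along channels or the token sequence, so this sketch assumes channel concatenation:

```python
import torch
import torch.nn.functional as F

def flow_matching_step(dit, x1, person_cond, garment_cond, text_emb):
    """One hypothetical training step: predict the velocity transporting noise to data.

    x1:           clean try-on latent          [B, C, H, W]
    person_cond:  latent of the person image   [B, C, H, W]
    garment_cond: latent of the garment image  [B, C, H, W]
    text_emb:     prompt embedding             [B, L, D]
    """
    b = x1.size(0)
    x0 = torch.randn_like(x1)                    # noise sample x_0 ~ N(0, I)
    t = torch.rand(b, device=x1.device)          # uniform timesteps in [0, 1]
    tv = t.view(b, 1, 1, 1)

    # Linear flow-matching path: x_t = (1 - t) x_0 + t x_1,
    # with target velocity v = x_1 - x_0.
    xt = (1 - tv) * x0 + tv * x1
    target_v = x1 - x0

    # Latent multimodal condition concatenation: stack the noisy latent with
    # the person and garment latents so the DiT sees all conditions in one
    # stream (assumed channel-wise here; token-wise is equally plausible).
    model_in = torch.cat([xt, person_cond, garment_cond], dim=1)

    pred_v = dit(model_in, t, text_emb)          # DiT predicts the velocity
    return F.mse_loss(pred_v, target_v)

class DummyDiT(torch.nn.Module):
    """Trivial stand-in for the transformer backbone, just to run the sketch."""
    def __init__(self, c_in=12, c_out=4):
        super().__init__()
        self.proj = torch.nn.Conv2d(c_in, c_out, 1)

    def forward(self, x, t, text_emb):
        # A real DiT would also embed t and attend over text_emb.
        return self.proj(x)

if __name__ == "__main__":
    dit = DummyDiT()
    x1 = torch.randn(2, 4, 32, 32)
    person = torch.randn(2, 4, 32, 32)
    garment = torch.randn(2, 4, 32, 32)
    text = torch.randn(2, 16, 64)
    print(flow_matching_step(dit, x1, person, garment, text).item())
```

Because the conditions ride along in the same latent stream rather than through an auxiliary reference network, the backbone needs no extra encoder pass per condition, which is one plausible source of the inference savings the abstract describes.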