🤖 AI Summary
Precise alignment between garments and human bodies under pose and appearance variations remains challenging in virtual try-on. This paper proposes Voost, the first bidirectional joint learning framework that unifies virtual try-on and try-off within a single diffusion-based Transformer architecture, without task-specific networks or additional annotations. To enhance robustness against resolution variations, mask perturbations, and cross-pose deformations, Voost introduces three key components: bidirectional consistency supervision, attention temperature scaling, and self-corrective sampling. Extensive experiments on benchmarks including VITON-HD and DC-VTON+ demonstrate that Voost achieves state-of-the-art performance in alignment accuracy, visual realism, and cross-domain generalization. These results validate the effectiveness of bidirectional modeling for fine-grained human-garment relational learning, establishing a new paradigm for unified garment manipulation.
📝 Abstract
Virtual try-on aims to synthesize a realistic image of a person wearing a target garment, but accurately modeling garment-body correspondence remains a persistent challenge, especially under pose and appearance variation. In this paper, we propose Voost, a unified and scalable framework that jointly learns virtual try-on and try-off with a single diffusion transformer. By modeling both tasks jointly, Voost enables each garment-person pair to supervise both directions and supports flexible conditioning on generation direction and garment category, enhancing garment-body relational reasoning without task-specific networks, auxiliary losses, or additional labels. In addition, we introduce two inference-time techniques: attention temperature scaling, for robustness to resolution and mask variation, and self-corrective sampling, which leverages the bidirectional consistency between the two tasks. Extensive experiments demonstrate that Voost achieves state-of-the-art results on both try-on and try-off benchmarks, consistently outperforming strong baselines in alignment accuracy, visual fidelity, and generalization.
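The abstract does not spell out the exact scaling rule behind attention temperature scaling. As a rough, hypothetical illustration of the general idea only (function names and the constant temperature are my own, not the paper's), one can multiply the attention logits by a temperature before the softmax, so that the sharpness of the attention map can be adjusted at inference time, e.g. when the token count changes with resolution:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, temperature=1.0):
    # Scaled dot-product attention with an extra temperature
    # multiplier on the logits: temperature > 1 sharpens the
    # attention map, temperature < 1 flattens it. This is a
    # generic sketch, not the paper's exact formulation.
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d)
    weights = softmax(temperature * logits, axis=-1)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 query tokens, dim 8
k = rng.normal(size=(16, 8))   # 16 key tokens
v = rng.normal(size=(16, 8))

out = attention(q, k, v, temperature=1.5)
```

At temperature 1.0 this reduces to standard scaled dot-product attention; raising the temperature concentrates each query's weight on fewer keys.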