🤖 AI Summary
Existing approaches typically treat garment generation and virtual dressing as disjoint tasks, often leading to entangled attributes and semantic interference under multi-source heterogeneous conditions, and thus failing to meet the demand for unified, controllable synthesis in real-world fashion workflows. To address this, the work proposes VersaVogue, a unified framework whose novel trait-routing attention module, built on a mixture-of-experts mechanism, enables disentangled injection of visual attributes. It further establishes a multi-perspective preference optimization pipeline that requires no manual annotation and applies Direct Preference Optimization (DPO) to improve both realism and controllability. Experiments show that VersaVogue significantly outperforms existing methods on both garment generation and virtual dressing, achieving state-of-the-art visual fidelity, semantic consistency, and fine-grained controllability.
📝 Abstract
Diffusion models have driven remarkable advancements in fashion image generation, yet prior works usually treat garment generation and virtual dressing as separate problems, limiting their flexibility in real-world fashion workflows. Moreover, fashion image synthesis under multi-source heterogeneous conditions remains challenging, as existing methods typically rely on simple feature concatenation or static layer-wise injection, which often causes attribute entanglement and semantic interference. To address these issues, we propose VersaVogue, a unified framework for multi-condition controllable fashion synthesis that jointly supports garment generation and virtual dressing, corresponding to the design and showcase stages of the fashion lifecycle. Specifically, we introduce a trait-routing attention (TA) module that leverages a mixture-of-experts mechanism to dynamically route condition features to the most compatible experts and generative layers, enabling disentangled injection of visual attributes such as texture, shape, and color. To further improve realism and controllability, we develop an automated multi-perspective preference optimization (MPO) pipeline that constructs preference data without human annotation or task-specific reward models. By combining evaluators of content fidelity, textual alignment, and perceptual quality, MPO identifies reliable preference pairs, which are then used to optimize the model via direct preference optimization (DPO). Extensive experiments on both garment generation and virtual dressing benchmarks demonstrate that VersaVogue consistently outperforms existing methods in visual fidelity, semantic consistency, and fine-grained controllability.
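The trait-routing idea described in the abstract — dynamically routing condition features to the most compatible experts — can be sketched as a sparse mixture-of-experts gate over condition tokens. Everything below (the `trait_routing` helper, dimensions, the top-k gating rule) is an illustrative assumption based on standard MoE practice, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def trait_routing(cond_tokens, gate_w, expert_ws, top_k=2):
    """Route each condition token to its top-k experts (hypothetical sketch).

    cond_tokens: (n, d) condition features (e.g. texture/shape/color tokens)
    gate_w:      (d, E) gating weights
    expert_ws:   list of E (d, d) per-expert projections
    Returns the gate-weighted mixture of expert outputs, shape (n, d).
    """
    logits = cond_tokens @ gate_w                      # (n, E) routing scores
    # sparse routing: keep only each token's top-k experts
    kth = np.sort(logits, axis=-1)[:, -top_k][:, None]
    masked = np.where(logits >= kth, logits, -np.inf)
    gates = softmax(masked, axis=-1)                   # zeros outside top-k
    # run every expert, then mix by the gate weights
    expert_out = np.stack([cond_tokens @ w for w in expert_ws], axis=1)  # (n, E, d)
    return (gates[..., None] * expert_out).sum(axis=1)

n, d, E = 4, 8, 3
cond = rng.normal(size=(n, d))
gate_w = rng.normal(size=(d, E))
experts = [rng.normal(size=(d, d)) for _ in range(E)]
out = trait_routing(cond, gate_w, experts)
```

In a real diffusion backbone the gate would also choose *which generative layers* receive each attribute; here the sketch only shows the per-token expert routing, which is the part that gives disentangled injection.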
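The MPO pipeline's "reliable preference pairs" plus the standard DPO objective might look like the following sketch. The three evaluator axes mirror the abstract (content fidelity, textual alignment, perceptual quality), but the `select_pairs` helper, the `min_margin` threshold, and the scalar form of `dpo_loss` are hypothetical simplifications.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def select_pairs(samples, min_margin=0.05):
    """Keep (winner, loser) pairs where one sample beats another on
    EVERY evaluator axis by at least min_margin (an assumed threshold),
    so the preference label needs no human annotation.

    samples: list of (name, scores), scores = (fidelity, text_align, quality).
    """
    pairs = []
    for i, (a, sa) in enumerate(samples):
        for b, sb in samples[i + 1:]:
            if all(x - y >= min_margin for x, y in zip(sa, sb)):
                pairs.append((a, b))          # a wins on all axes
            elif all(y - x >= min_margin for x, y in zip(sa, sb)):
                pairs.append((b, a))          # b wins on all axes
    return pairs

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective for one preference pair:
    -log sigmoid(beta * [(log pi - log pi_ref)_winner - (...)_loser])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin))
```

Requiring a win on all axes discards ambiguous pairs (e.g. better fidelity but worse text alignment), which is one plausible reading of how the evaluators "identify reliable preference pairs" without a task-specific reward model.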