🤖 AI Summary
Existing text-guided virtual try-on (VTON) frameworks suffer from two major bottlenecks: (1) the semantic gap between textual instructions and visual content, and (2) the scarcity of diverse training data for complex scenarios. To address these challenges, we propose the first general-purpose VTON framework grounded in multimodal large language models (MLLMs). Our method introduces: (1) an MLLM-driven semantic alignment module that explicitly models fine-grained cross-modal correspondences via learnable query tokens and a semantic alignment loss; and (2) a two-stage progressive training strategy with a self-synthesized data pipeline that enables the model to learn complex tasks from limited data. The framework supports diverse tasks, including multi-garment replacement and model-to-model transfer, without task-specific architectural modifications. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with notable improvements in both the geometric accuracy and the visual realism of the generated try-on results.
📝 Abstract
Image-based virtual try-on (VTON) aims to synthesize photorealistic images of a person wearing specified garments. Despite significant progress, building a universal VTON framework that can flexibly handle diverse and complex tasks remains a major challenge. Recent methods explore multi-task VTON frameworks guided by textual instructions, yet they still face two key limitations: (1) the semantic gap between text instructions and reference images, and (2) data scarcity in complex scenarios. To address these challenges, we propose UniFit, a universal VTON framework driven by a Multimodal Large Language Model (MLLM). Specifically, we introduce an MLLM-Guided Semantic Alignment Module (MGSA), which integrates multimodal inputs using an MLLM and a set of learnable queries. By imposing a semantic alignment loss, MGSA captures cross-modal semantic relationships and provides coherent, explicit semantic guidance for the generative process, thereby reducing the semantic gap. Moreover, we devise a two-stage progressive training strategy with a self-synthesis pipeline, enabling UniFit to learn complex tasks from limited data. Extensive experiments show that UniFit not only supports a wide range of VTON tasks, including multi-garment and model-to-model try-on, but also achieves state-of-the-art performance. The source code and pretrained models are available at https://github.com/zwplus/UniFit.
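The core MGSA mechanism described above (learnable queries fused with MLLM token features under a semantic alignment loss) can be sketched as follows. This is a minimal numpy illustration, not the released implementation: the single-head scaled dot-product fusion, the cosine-based alignment loss, and all tensor shapes are assumptions made for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_with_queries(queries, mllm_tokens):
    """Learnable queries attend over MLLM token features.

    Single-head scaled dot-product attention, used here as an
    illustrative stand-in for MGSA's query-based fusion.
    queries: (Q, d), mllm_tokens: (T, d) -> fused: (Q, d)
    """
    d = queries.shape[-1]
    attn = softmax(queries @ mllm_tokens.T / np.sqrt(d))  # (Q, T)
    return attn @ mllm_tokens                              # (Q, d)

def semantic_alignment_loss(fused, target):
    """1 - mean cosine similarity between fused query features and
    target visual features; bounded in [0, 2]. An assumed form of
    the paper's semantic alignment loss."""
    f = fused / np.linalg.norm(fused, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return float(1.0 - (f * t).sum(axis=-1).mean())

# Hypothetical sizes: 8 learnable queries, 32 MLLM tokens, dim 16.
queries = rng.standard_normal((8, 16)) * 0.02   # learnable parameters
tokens = rng.standard_normal((32, 16))          # MLLM hidden states
target = rng.standard_normal((8, 16))           # reference visual features

fused = fuse_with_queries(queries, tokens)
loss = semantic_alignment_loss(fused, target)
```

In training, minimizing this loss would pull the fused query features toward features extracted from the reference garment/person images, which is one plausible way the "explicit semantic guidance" for the generator could be enforced.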