UniFit: Towards Universal Virtual Try-on with MLLM-Guided Semantic Alignment

📅 2025-11-19
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing text-guided virtual try-on (VTON) frameworks suffer from two major bottlenecks: (1) the semantic gap between textual instructions and visual content, and (2) the scarcity of diverse training data for complex scenarios. To address these challenges, the paper proposes UniFit, a general-purpose VTON framework grounded in multimodal large language models (MLLMs). The method introduces (1) an MLLM-driven semantic alignment module that explicitly models fine-grained cross-modal correspondences via learnable query tokens, and (2) a two-stage progressive training strategy that combines a self-synthesized data pipeline with a semantic alignment loss so the model can learn complex tasks from limited data. The framework supports diverse tasks, including multi-garment replacement and model-to-model transfer, without task-specific architectural modifications. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with significant improvements in both the geometric accuracy and visual realism of generated try-on results.

📝 Abstract
Image-based virtual try-on (VTON) aims to synthesize photorealistic images of a person wearing specified garments. Despite significant progress, building a universal VTON framework that can flexibly handle diverse and complex tasks remains a major challenge. Recent methods explore multi-task VTON frameworks guided by textual instructions, yet they still face two key limitations: (1) the semantic gap between text instructions and reference images, and (2) data scarcity in complex scenarios. To address these challenges, we propose UniFit, a universal VTON framework driven by a Multimodal Large Language Model (MLLM). Specifically, we introduce an MLLM-Guided Semantic Alignment Module (MGSA), which integrates multimodal inputs using an MLLM and a set of learnable queries. By imposing a semantic alignment loss, MGSA captures cross-modal semantic relationships and provides coherent and explicit semantic guidance for the generative process, thereby reducing the semantic gap. Moreover, by devising a two-stage progressive training strategy with a self-synthesis pipeline, UniFit is able to learn complex tasks from limited data. Extensive experiments show that UniFit not only supports a wide range of VTON tasks, including multi-garment and model-to-model try-on, but also achieves state-of-the-art performance. The source code and pretrained models are available at https://github.com/zwplus/UniFit.
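
To make the MGSA design concrete, here is a minimal PyTorch sketch of learnable queries pooling an MLLM's hidden states into conditioning tokens, paired with a cosine-style semantic alignment loss. All names, dimensions, and the exact loss form are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of an MLLM-guided semantic alignment module in the spirit
# of MGSA. Dimensions, head counts, and the cosine alignment loss below are
# assumptions for illustration; see the released code for the real design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAlignmentModule(nn.Module):
    def __init__(self, mllm_dim=4096, cond_dim=768, num_queries=32):
        super().__init__()
        # Learnable query tokens that pool multimodal semantics from the MLLM.
        self.queries = nn.Parameter(torch.randn(num_queries, mllm_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(mllm_dim, num_heads=8,
                                                batch_first=True)
        # Project the pooled queries into the generator's conditioning space.
        self.proj = nn.Linear(mllm_dim, cond_dim)

    def forward(self, mllm_hidden):
        # mllm_hidden: (B, L, mllm_dim) hidden states of the MLLM after it
        # has read the text instruction and the reference images.
        q = self.queries.unsqueeze(0).expand(mllm_hidden.size(0), -1, -1)
        pooled, _ = self.cross_attn(q, mllm_hidden, mllm_hidden)
        return self.proj(pooled)  # (B, num_queries, cond_dim)

def semantic_alignment_loss(query_tokens, target_embed):
    # Pull the mean query embedding toward a target embedding (e.g., from a
    # frozen vision encoder of the ground-truth garment); a stand-in for the
    # paper's semantic alignment loss, not its exact formulation.
    pred = F.normalize(query_tokens.mean(dim=1), dim=-1)
    tgt = F.normalize(target_embed, dim=-1)
    return (1.0 - (pred * tgt).sum(dim=-1)).mean()
```

In training, the conditioning tokens would presumably feed the generator's cross-attention layers while the alignment loss is added to the generation objective, which is how the module can supply the "coherent and explicit semantic guidance" described above.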
Problem

Research questions and friction points this paper is trying to address.

Building a universal virtual try-on framework that handles diverse, complex tasks
Bridging the semantic gap between text instructions and reference images
Overcoming data scarcity in complex virtual try-on scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLLM-guided semantic alignment module narrows the gap between instructions and reference images
Two-stage progressive training with a self-synthesis pipeline handles data scarcity (see the sketch below)
Learnable queries capture cross-modal semantic relationships
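
As referenced in the list above, the following schematic sketches how the two-stage progressive training with self-synthesis might be organized. Every method name (generation_loss, alignment_loss, synthesize_pair, step) is a hypothetical placeholder inferred from the abstract, not UniFit's actual API.

```python
# Schematic two-stage progressive training loop with a self-synthesis
# pipeline, reconstructed from the paper's description. All method names
# are hypothetical placeholders.
def train_unifit(model, paired_data, complex_prompts):
    # Stage 1: learn standard garment try-on from abundant paired data,
    # jointly optimizing the generation and semantic alignment objectives.
    for batch in paired_data:
        loss = model.generation_loss(batch) + model.alignment_loss(batch)
        loss.backward()
        model.step()

    # Stage 2: use the stage-1 model to self-synthesize training pairs for
    # complex tasks (e.g., multi-garment, model-to-model try-on), then
    # fine-tune on them so the framework generalizes despite scarce data.
    synthetic_pairs = [model.synthesize_pair(p) for p in complex_prompts]
    for batch in synthetic_pairs:
        loss = model.generation_loss(batch) + model.alignment_loss(batch)
        loss.backward()
        model.step()
```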
Wei Zhang
School of Cyber Science and Engineering, Nanjing University of Science and Technology
Yeying Jin
Tencent | National University of Singapore
Computer Vision · AIGC · GenAI · MLLM · VLM
Xin Li
University of Science and Technology of China
Yan Zhang
ByteDance
Xiaofeng Cong
Southeast University
Image Dehazing · Generative Algorithms · Image Restoration
Cong Wang
University of California, San Francisco
Fengcai Qiao
National University of Defense Technology
Zhichao Lian
School of Cyber Science and Engineering, Nanjing University of Science and Technology