🤖 AI Summary
This study addresses the lack of a systematic benchmark for evaluating general-purpose multi-reference image editing models on virtual try-on (VTON) tasks, which hinders rigorous assessment of their robustness and generalization in complex real-world scenarios. To this end, we introduce VTEdit-Bench, a comprehensive evaluation benchmark comprising 24,220 test image pairs across five progressively challenging task categories. We further propose VTEdit-QA, a reference-aware evaluator built on a vision-language model that enables automatic, multi-dimensional assessment along three axes: model consistency, garment fidelity, and overall image quality. Experiments reveal that leading general-purpose editing models match specialized VTON methods on standard tasks and generalize more stably under high-difficulty conditions, yet still struggle with complex setups such as multiple clothing references. This work establishes the first systematic evaluation framework for VTON tailored to general-purpose image editing models.
📝 Abstract
As virtual try-on (VTON) continues to advance, a growing number of real-world scenarios have emerged that push beyond the capabilities of existing specialized VTON models. Meanwhile, universal multi-reference image editing models have progressed rapidly and exhibit strong generalization in visual editing, suggesting a promising route toward more flexible VTON systems. However, the strengths and limitations of these universal editors for VTON remain insufficiently explored due to the lack of systematic evaluation benchmarks. To address this gap, we introduce VTEdit-Bench, a comprehensive benchmark designed to evaluate universal multi-reference image editing models across diverse, realistic VTON scenarios. VTEdit-Bench contains 24,220 test image pairs spanning five representative VTON tasks of progressively increasing complexity, enabling systematic analysis of robustness and generalization. We further propose VTEdit-QA, a reference-aware VLM-based evaluator that assesses VTON performance from three key aspects: model consistency, cloth consistency, and overall image quality. Using this framework, we systematically evaluate eight universal editing models and compare them with seven specialized VTON models. Results show that top universal editors are competitive on conventional tasks and generalize more stably to harder scenarios, but remain challenged by complex reference configurations, particularly multi-cloth conditioning.