🤖 AI Summary
This paper addresses the challenge of synthesizing high-fidelity fashion images conditioned jointly on localized sketch inputs and textual descriptions. The authors propose LOTS, a diffusion-based multi-condition image synthesis framework built on a modularized pair-centric representation and a stepwise fusion strategy, which uses attention mechanisms to harmonize sketch-guided local structural priors with multi-granularity text covering both global semantics and fine-grained local attributes. The contributions are threefold: (1) Sketchy, built on Fashionpedia, is the first fashion dataset providing multiple paired sketches and descriptive texts per image; (2) the method achieves state-of-the-art performance on both global metrics (FID, CLIP Score) and localized metrics (LPIPS, part segmentation accuracy); (3) a human evaluation study confirms superior design flexibility and visual fidelity compared to existing approaches.
📝 Abstract
Fashion design is a complex creative process that blends visual and textual expressions. Designers convey ideas through sketches, which define spatial structure and design elements, and textual descriptions, capturing material, texture, and stylistic details. In this paper, we present LOcalized Text and Sketch for fashion image generation (LOTS), an approach for compositional sketch-text based generation of complete fashion outlooks. LOTS leverages a global description with paired localized sketch + text information for conditioning and introduces a novel step-based merging strategy for diffusion adaptation. First, a Modularized Pair-Centric representation encodes sketches and text into a shared latent space while preserving independent localized features; then, a Diffusion Pair Guidance phase integrates both local and global conditioning via attention-based guidance within the diffusion model's multi-step denoising process. To validate our method, we build on Fashionpedia to release Sketchy, the first fashion dataset where multiple text-sketch pairs are provided per image. Quantitative results show LOTS achieves state-of-the-art image generation performance on both global and localized metrics, while qualitative examples and a human evaluation study highlight its unprecedented level of design customization.
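To make the two-phase conditioning concrete, here is a minimal, self-contained numpy sketch of the idea described above: each localized (sketch, text) pair is first encoded independently into a shared latent space, and the resulting pair tokens plus a global description token are then fused via attention at each denoising step. All names, dimensions, and projection choices here are illustrative assumptions, not the paper's actual architecture.

```python
# Toy illustration of LOTS-style conditioning (all names/shapes are
# invented for illustration; the real model uses learned encoders and
# cross-attention inside a diffusion U-Net's denoising loop).
import numpy as np

D = 8  # toy latent dimension

def encode_pair(sketch_feat, text_feat, W_s, W_t):
    """Project one localized sketch/text pair into a single shared-latent
    token, keeping pairs independent of each other (pair-centric encoding)."""
    return np.tanh(sketch_feat @ W_s + text_feat @ W_t)

def attention(query, keys, values):
    """Standard scaled dot-product attention for a single query vector."""
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(D, D)), rng.normal(size=(D, D))

# Three localized (sketch, text) pairs, encoded as independent tokens.
pairs = [encode_pair(rng.normal(size=D), rng.normal(size=D), W_s, W_t)
         for _ in range(3)]
global_text = rng.normal(size=D)  # global garment-description embedding

# Pair tokens and the global token form the conditioning sequence; at each
# denoising step a query attends over them (stand-in for diffusion guidance).
tokens = np.stack(pairs + [global_text])
for step in range(4):  # toy stand-in for the multi-step denoising loop
    query = rng.normal(size=D)  # stand-in for a cross-attention query
    cond = attention(query, tokens, tokens)
```

The key design point this mirrors is that the pairs are never pooled into one vector before fusion: each pair stays a separate token, so attention can route each local condition to the image region it governs.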