🤖 AI Summary
Existing diffusion-based approaches to text- or image-guided, garment-centric human generation face a dilemma: lightweight adapters tend to produce inconsistent garment textures, while full fine-tuning is computationally expensive and erodes the pretrained model's generalization. To resolve this, the authors propose DreamFit, whose lightweight Anything-Dressing Encoder (only 83.4M trainable parameters) combines adaptive attention with LoRA modules, and which enriches prompts with fine-grained garment descriptions from a pretrained large multi-modal model (LMM). DreamFit generalizes well across garments, non-garments, creative styles, and prompt instructions, and offers plug-and-play compatibility with community control plugins such as ControlNet. Extensive experiments on high-resolution (768×512) benchmarks and in-the-wild images demonstrate state-of-the-art performance in garment-centric human generation.
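To make the encoder design concrete, here is a minimal PyTorch sketch of the two trainable ingredients named above: a LoRA update wrapped around a frozen projection, and a gated ("adaptive") cross-attention that fuses garment features into the denoising stream. The module names, dimensions, and gating scheme are illustrative assumptions, not DreamFit's released architecture.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank trainable update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # base weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)         # LoRA starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

class AdaptiveAttentionFusion(nn.Module):
    """Cross-attend to garment tokens and blend them in with a learned gate."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # gate starts closed

    def forward(self, hidden, garment_feats):
        fused, _ = self.attn(hidden, garment_feats, garment_feats)
        return hidden + torch.tanh(self.gate) * fused

# usage: wrap a frozen projection and fuse garment features into the stream
base = nn.Linear(320, 320)
lora_proj = LoRALinear(base, rank=8)
fusion = AdaptiveAttentionFusion(dim=320)
hidden = torch.randn(2, 77, 320)           # denoiser hidden tokens
garment = torch.randn(2, 64, 320)          # garment-encoder tokens
out = fusion(lora_proj(hidden), garment)
print(out.shape)  # torch.Size([2, 77, 320])
```

Because the up-projection and the gate both start at zero, the wrapped model initially reproduces the frozen pretrained behavior exactly; only the small LoRA and fusion parameters are trained, which is one way such a design keeps the trainable footprint in the tens of millions of parameters.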
📝 Abstract
Diffusion models for garment-centric human generation from text or image prompts have attracted growing attention for their broad application potential. However, existing methods often face a dilemma: lightweight approaches, such as adapters, are prone to generating inconsistent textures, while finetune-based methods incur high training costs and struggle to preserve the generalization capabilities of pretrained diffusion models, limiting their performance across diverse scenarios. To address these challenges, we propose DreamFit, which incorporates a lightweight Anything-Dressing Encoder specifically tailored for garment-centric human generation. DreamFit has three key advantages: (1) **Lightweight training**: with the proposed adaptive attention and LoRA modules, DreamFit reduces the trainable parameters to 83.4M. (2) **Anything-Dressing**: our model generalizes surprisingly well to a wide range of (non-)garments, creative styles, and prompt instructions, consistently delivering high-quality results across diverse scenarios. (3) **Plug-and-play**: DreamFit is engineered for smooth integration with any community control plugins for diffusion models, ensuring easy compatibility and minimizing adoption barriers. To further enhance generation quality, DreamFit leverages pretrained large multi-modal models (LMMs) to enrich the prompt with fine-grained garment descriptions, thereby reducing the prompt gap between training and inference. We conduct comprehensive experiments on both 768×512 high-resolution benchmarks and in-the-wild images. DreamFit surpasses all existing methods, highlighting its state-of-the-art capabilities in garment-centric human generation.
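As a rough illustration of the prompt-enrichment step, the sketch below uses BLIP as a stand-in captioner (the paper's actual LMM and prompt template are not specified here): the garment image is captioned, and the fine-grained description is appended to the user prompt, narrowing the gap between training-time and inference-time prompts.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# BLIP stands in for the pretrained LMM; DreamFit's actual captioning
# model and prompt template are assumptions in this sketch.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

def enrich_prompt(user_prompt: str, garment_image: Image.Image) -> str:
    """Append a fine-grained garment caption to the user prompt so inference
    prompts resemble the detailed prompts seen during training."""
    inputs = processor(garment_image, return_tensors="pt")
    ids = captioner.generate(**inputs, max_new_tokens=40)
    caption = processor.decode(ids[0], skip_special_tokens=True)
    return f"{user_prompt}, wearing {caption}"

# e.g. enrich_prompt("a woman walking in a park", Image.open("jacket.png"))
```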