GarmentPainter: Efficient 3D Garment Texture Synthesis with Character-Guided Diffusion Model

πŸ“… 2026-03-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenges of generating high-fidelity, 3D-consistent garment textures, which stem from structural complexity and the need for global coherence. The authors propose GarmentPainter, a framework that leverages UV position maps as 3D structural guidance within UV space, combined with reference character images, to enable high-quality, 3D-aware texture synthesis via a diffusion model. A novel type selection module is introduced to support fine-grained control without requiring image-to-mesh alignment. Furthermore, multi-source guidance signals are spatially aligned and directly injected into the diffusion model’s input, eliminating the need for architectural modifications to the underlying UNet. Experiments demonstrate that the proposed method outperforms existing approaches in terms of visual fidelity, 3D consistency, and computational efficiency.
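The input-level injection described above lends itself to a short sketch. Below is a minimal, hypothetical illustration (not the authors' released code) of how spatially aligned guidance could be fed to an off-the-shelf denoising UNet purely through extra input channels: the UV position map and reference-derived features are resampled to the latent resolution and channel-concatenated, leaving the UNet's internal architecture untouched. All names, channel counts, and the 1Γ—1 input projection are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidanceConcatWrapper(nn.Module):
    """Wrap a standard denoising UNet so guidance enters only through
    extra input channels; the UNet itself is left unmodified."""

    def __init__(self, unet: nn.Module, latent_ch: int = 4,
                 uv_ch: int = 3, ref_ch: int = 4):
        super().__init__()
        self.unet = unet
        # A 1x1 projection maps the concatenated channels back to the
        # channel count the pretrained UNet expects at its input.
        self.in_proj = nn.Conv2d(latent_ch + uv_ch + ref_ch,
                                 latent_ch, kernel_size=1)

    def forward(self, noisy_latent, timestep, uv_position_map, ref_feature_map):
        # Resample each guidance signal to the latent's spatial size so
        # all inputs are pixel-aligned in UV space before concatenation.
        size = noisy_latent.shape[-2:]
        uv_pos = F.interpolate(uv_position_map, size=size,
                               mode="bilinear", align_corners=False)
        ref = F.interpolate(ref_feature_map, size=size,
                            mode="bilinear", align_corners=False)
        x = torch.cat([noisy_latent, uv_pos, ref], dim=1)  # channel-wise concat
        return self.unet(self.in_proj(x), timestep)
```

A design note on this sketch: because the conditioning is purely concatenative, any pretrained UNet with a matching input channel count can be reused, which is consistent with the summary's claim that no architectural modification is required.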

πŸ“ Abstract
Generating high-fidelity, 3D-consistent garment textures remains a challenging problem due to the inherent complexity of garment structures and the stringent requirement for detailed, globally consistent texture synthesis. Existing approaches rely on 2D-based diffusion models that inherently struggle with 3D consistency, require expensive multi-step optimization, or depend on strict spatial alignment between 2D reference images and 3D meshes, which limits their flexibility and scalability. In this work, we introduce GarmentPainter, a simple yet efficient framework for synthesizing high-quality, 3D-aware garment textures in UV space. Our method leverages a UV position map as 3D structural guidance, ensuring texture consistency across the garment surface during generation. To enhance control and adaptability, we introduce a type selection module that enables fine-grained texture generation for specific garment components from a character reference image, without requiring alignment between the reference image and the 3D mesh. GarmentPainter integrates all guidance signals into the input of the diffusion model in a spatially aligned manner, without modifying the underlying UNet architecture. Extensive experiments demonstrate that GarmentPainter achieves state-of-the-art visual fidelity, 3D consistency, and computational efficiency, outperforming existing methods in both qualitative and quantitative evaluations.
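The component-level control described in the abstract suggests masked generation in UV space. The following is a minimal sketch assuming an inpainting-style masking scheme (a standard technique, not confirmed as the paper's actual type selection mechanism): a binary UV mask for the chosen garment component confines where the diffusion model writes texture, while pixels outside the mask retain the existing texture. The `denoise_step` signature, `alpha_bars` schedule, and mask source are all hypothetical.

```python
import torch

@torch.no_grad()
def texture_component(denoise_step, alpha_bars, x_known, mask, guidance):
    """Inpainting-style masked generation sketch: synthesize texture only
    inside the selected component's UV mask (1 = regenerate, 0 = keep)."""
    x_t = torch.randn_like(x_known)
    for t in reversed(range(len(alpha_bars))):
        x_t = denoise_step(x_t, t, guidance)  # one reverse step: level t -> t-1
        if t > 0:
            # Forward-diffuse the known texture to the same noise level so
            # the two regions stay statistically compatible when blended.
            noise = torch.randn_like(x_known)
            known = (alpha_bars[t - 1].sqrt() * x_known
                     + (1 - alpha_bars[t - 1]).sqrt() * noise)
        else:
            known = x_known  # final step: paste back the clean texture
        x_t = mask * x_t + (1.0 - mask) * known
    return x_t
```

Under this assumed scheme, switching which component is textured only changes the mask, not the model, which mirrors the abstract's claim that fine-grained control needs no image-to-mesh alignment.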
Problem

Research questions and friction points this paper is trying to address.

3D garment texture
texture synthesis
3D consistency
diffusion model
UV space
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-aware texture synthesis
UV space diffusion
character-guided generation
garment component selection
alignment-free reference