Garment3DGen: 3D Garment Stylization and Texture Generation

📅 2024-03-27
🏛️ arXiv.org
📈 Citations: 16
Influential: 4
🤖 AI Summary
This work addresses end-to-end, fully automatic 3D garment reconstruction from a single real or synthetic image, without manual intervention. The method first leverages an image-to-3D diffusion model to generate coarse geometry as pseudo-ground-truth, then refines it via differentiable template-mesh deformation, jointly optimizing geometric accuracy, surface-normal consistency, and topological integrity to ensure simulation readiness. Concurrently, a global-local coherent texture-mapping mechanism produces high-fidelity UV texture maps. Compared to prior approaches, the framework achieves significant improvements in geometric precision, topological correctness, and physical-simulation compatibility, with superior quantitative metrics and visual quality over baselines. The resulting garments are directly usable for sketch-driven modeling, VR interaction, and automated draping simulation. Code is publicly available.
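
As a concrete illustration of the deformation stage (this is a minimal sketch, not the authors' released code), per-vertex offsets on a template mesh can be optimized against points sampled from the pseudo-ground-truth mesh, with edge, normal-consistency, and Laplacian regularizers standing in for the paper's quality-preserving losses. PyTorch3D is assumed; the file names, loss weights, and iteration counts below are illustrative assumptions.

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.loss import (
    chamfer_distance,
    mesh_edge_loss,
    mesh_laplacian_smoothing,
    mesh_normal_consistency,
)
from pytorch3d.ops import sample_points_from_meshes

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Base garment template and the coarse image-to-3D output used as pseudo-ground-truth.
template = load_objs_as_meshes(["template_garment.obj"], device=device)  # hypothetical path
target = load_objs_as_meshes(["diffusion_output.obj"], device=device)    # hypothetical path

# Optimize per-vertex offsets so the template's topology (and thus simulability) is preserved.
offsets = torch.zeros(template.verts_packed().shape, device=device, requires_grad=True)
optimizer = torch.optim.Adam([offsets], lr=1e-3)

target_pts = sample_points_from_meshes(target, num_samples=5000)

for step in range(2000):
    optimizer.zero_grad()
    deformed = template.offset_verts(offsets)
    src_pts = sample_points_from_meshes(deformed, num_samples=5000)

    # Geometric fit to the pseudo-ground-truth surface.
    loss_fit, _ = chamfer_distance(src_pts, target_pts)
    # Regularizers standing in for the paper's quality-preserving losses.
    loss_edge = mesh_edge_loss(deformed)             # discourages stretched edges
    loss_normal = mesh_normal_consistency(deformed)  # smooths neighboring face normals
    loss_smooth = mesh_laplacian_smoothing(deformed, method="uniform")

    loss = loss_fit + 0.5 * loss_edge + 0.1 * loss_normal + 0.5 * loss_smooth
    loss.backward()
    optimizer.step()
```

Keeping the optimization over vertex offsets of a fixed template, rather than regenerating geometry, is what preserves the mesh connectivity that downstream cloth simulators require.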

📝 Abstract
We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance. Our proposed approach allows users to generate 3D textured clothes based on both real and synthetic images, such as those generated by text prompts. The generated assets can be directly draped and simulated on human bodies. We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries. However, since these geometries cannot be utilized directly for downstream tasks, we propose to use them as pseudo ground-truth and set up a mesh deformation optimization procedure that deforms a base template mesh to match the generated 3D target. Carefully designed losses allow the base mesh to deform freely towards the desired target while preserving mesh quality and topology so that the result can be simulated. Finally, we generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance, allowing us to render the generated 3D assets. With Garment3DGen, users can generate the simulation-ready 3D garment of their choice without the need for artist intervention. We present a plethora of quantitative and qualitative comparisons on various assets and demonstrate that Garment3DGen unlocks key applications, ranging from sketch-to-simulated garments to interacting with garments in VR. Code is publicly available.
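
For intuition on the texturing step, the basic idea can be sketched as differentiable-rendering optimization of a learnable UV map against the guidance image from a single view. This is a minimal stand-in for the paper's global-local coherent texture module, again assuming PyTorch3D; the camera pose, resolution, and placeholder guidance tensor are assumptions.

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesUV, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mesh = load_objs_as_meshes(["deformed_garment.obj"], device=device)  # OBJ must carry a UV layout
faces_uvs = mesh.textures.faces_uvs_padded()
verts_uvs = mesh.textures.verts_uvs_padded()

# Learnable texture image, initialized to mid-gray.
texture_img = torch.full((1, 512, 512, 3), 0.5, device=device, requires_grad=True)

R, T = look_at_view_transform(dist=2.0, elev=10.0, azim=0.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, raster_settings=RasterizationSettings(image_size=256)
    ),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=PointLights(device=device)),
)

guidance = torch.rand(1, 256, 256, 3, device=device)  # placeholder for the input guidance image

optimizer = torch.optim.Adam([texture_img], lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    mesh.textures = TexturesUV(maps=texture_img, faces_uvs=faces_uvs, verts_uvs=verts_uvs)
    rendered = renderer(mesh)[..., :3]  # drop the alpha channel
    loss = ((rendered - guidance) ** 2).mean()  # photometric fit to the guidance view
    loss.backward()
    optimizer.step()
```

A single-view photometric loss like this leaves unobserved regions unconstrained; the paper's globally and locally consistent texture generation is precisely what such a naive baseline lacks.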
Problem

Research questions and friction points this paper addresses.

Generating 3D garment assets from single input images
Deforming base mesh to match generated 3D target
Creating high-fidelity texture maps for 3D rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages image-to-3D diffusion for garment geometry
Optimizes mesh deformation with quality-preserving losses
Generates high-fidelity, consistent texture maps
👥 Authors
Nikolaos Sarafianos, Meta Reality Labs
Tuur Stuyck, Meta Reality Labs
Xiaoyu Xiang, Research Scientist, Meta Reality Labs (Computer Vision · Deep Learning · Image Processing)
Yilei Li, Meta Reality Labs
Jovan Popovic, Meta Reality Labs
Rakesh Ranjan, Meta Reality Labs