DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment Animation

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly achieving fine-grained wrinkle modeling and topology-agnostic generalization in parametric 3D garment animation, this paper proposes the first data-driven approach that leverages a 2D image diffusion model. The core innovation encodes 3D garment deformations as layout-consistent offset texture maps, enabling conditional diffusion modeling directly in 2D texture space and thereby decoupling geometric deformation from mesh-topology constraints. The method supports joint conditioning on pose, body shape, and garment design, allowing both the sampling of multiple plausible deformations for a single frame and the synthesis of temporally coherent animations. Experiments demonstrate that the approach generates high-fidelity, detail-rich 3D animations for garments of arbitrary topology and diverse body shapes, exhibiting strong generalization and significantly outperforming existing mesh-dependent methods.
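The offset-texture representation described above can be sketched in a few lines: per-vertex 3D offsets relative to the garment template are scattered into an RGB texture at each vertex's UV coordinate, and decoded by reading the texture back at the same texels. This is a minimal illustrative sketch under assumed names and a nearest-texel scatter, not the paper's implementation; a real pipeline would rasterize and interpolate offsets across triangles of the UV layout.

```python
import numpy as np

def encode_offsets_to_texture(template_verts, deformed_verts, uvs, res=64):
    """Scatter per-vertex 3D offsets (deformed - template) into a
    layout-consistent RGB texture indexed by each vertex's UV coordinate.
    (Sketch: nearest texel only, no triangle rasterization.)"""
    offsets = deformed_verts - template_verts            # (V, 3)
    tex = np.zeros((res, res, 3), dtype=np.float32)
    # Map UVs in [0, 1] to integer texel coordinates.
    px = np.clip((uvs * (res - 1)).round().astype(int), 0, res - 1)
    tex[px[:, 1], px[:, 0]] = offsets
    return tex

def decode_texture_to_verts(template_verts, tex, uvs):
    """Read the offset stored at each vertex's texel and add it back
    onto the template to recover the deformed garment."""
    res = tex.shape[0]
    px = np.clip((uvs * (res - 1)).round().astype(int), 0, res - 1)
    return template_verts + tex[px[:, 1], px[:, 0]]
```

Because the texture layout is fixed by the template's UV parameterization, any garment sharing that layout can be encoded and decoded the same way, regardless of its mesh connectivity.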

📝 Abstract
We present a data-driven method for learning to generate animations of 3D garments using a 2D image diffusion model. In contrast to existing methods, typically based on fully connected networks, graph neural networks, or generative adversarial networks, which struggle to cope with parametric garments exhibiting fine wrinkle detail, our approach synthesizes high-quality 3D animations for a wide variety of garments and body shapes while remaining agnostic to the garment mesh topology. Our key idea is to represent 3D garment deformations as a 2D layout-consistent texture that encodes 3D offsets with respect to a parametric garment template. Using this representation, we encode a large dataset of garments simulated in various motions and shapes and train a novel conditional diffusion model that synthesizes high-quality pose-, shape-, and design-dependent 3D garment deformations. Since our model is generative, we can synthesize various plausible deformations for a given target pose, shape, and design. Additionally, we show that we can further condition our model on an existing garment state, which enables the generation of temporally coherent sequences.
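Because the model operates in 2D texture space, generating a deformation reduces to standard conditional diffusion sampling over an offset texture. The loop below is a generic DDPM-style ancestral sampler for illustration, not the paper's exact sampler; the noise schedule, step count, and `denoise_fn` interface are assumptions.

```python
import numpy as np

def ddpm_sample(denoise_fn, cond, shape, n_steps=50, seed=0):
    """Minimal DDPM-style ancestral sampling loop (sketch).
    `denoise_fn(x_t, t, cond)` predicts the noise present in x_t;
    `cond` bundles the pose / body-shape / garment-design conditioning
    (and, for temporally coherent sequences, a previous garment state)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)     # linear variance schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.normal(size=shape)                   # start from pure Gaussian noise
    for t in range(n_steps - 1, -1, -1):
        eps = denoise_fn(x, t, cond)             # predicted noise at step t
        # Posterior mean of x_{t-1} given the noise prediction.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                # inject noise except at the final step
            x += np.sqrt(betas[t]) * rng.normal(size=shape)
    return x
```

Re-running the sampler with different seeds but the same conditioning yields the "various plausible deformations" the abstract describes; the sampled texture is then decoded back into per-vertex 3D offsets.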
Problem

Research questions and friction points this paper is trying to address.

Generate 3D garment animations from 2D diffusion models
Overcome limitations of existing methods in wrinkle detail
Produce diverse deformations agnostic to mesh topology
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 2D diffusion model for 3D garment animation
Encodes 3D deformations as 2D layout-consistent texture
Generates diverse deformations with conditional diffusion
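The temporal-coherence idea, conditioning each frame on the previously generated garment state, can be illustrated with a simple autoregressive rollout. Everything here (function names, the `cond` dictionary layout) is a hypothetical sketch of the control flow, with `sample_fn` standing in for one run of the conditional diffusion sampler.

```python
import numpy as np

def rollout(sample_fn, poses, shape_params, design, init_tex):
    """Autoregressive animation sketch: each frame's offset texture is
    sampled conditioned on the target pose/shape/design AND the previous
    frame's texture, which is how temporally coherent sequences arise.
    `sample_fn(cond)` stands in for one conditional diffusion sampling run."""
    textures = []
    prev = init_tex
    for pose in poses:
        cond = {"pose": pose, "shape": shape_params,
                "design": design, "prev": prev}
        prev = sample_fn(cond)                   # next frame depends on the last
        textures.append(prev)
    return textures
```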