Spatio-Temporal Garment Reconstruction Using Diffusion Mapping via Pattern Coordinates

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-fidelity 3D reconstruction of loose garments from monocular images or videos remains challenging. This work proposes a unified framework for both static and dynamic clothing reconstruction, leveraging an Implicit Sewing Pattern (ISP) in UV space to encode garment shape priors. By integrating a spatio-temporal diffusion mechanism with analytical projection constraints, the method establishes precise correspondences among pixels, UV coordinates, and 3D geometry. At inference time, a guidance strategy is introduced to enforce cross-frame consistency and plausibly complete occluded regions. Experiments demonstrate that the approach generalizes robustly to real-world imagery, significantly outperforming existing methods in reconstructing fine geometric details. Furthermore, the reconstructed garments support downstream applications such as texture editing, garment retargeting, and animation.
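The "guidance strategy" for cross-frame consistency is described only at a high level above. The general idea behind such test-time guidance, stepping each frame's estimate down the gradient of a temporal-disagreement loss, can be illustrated with a toy sketch. This is not the paper's algorithm; the function name, step size, and squared-difference loss are all illustrative assumptions.

```python
import numpy as np

def consistency_guidance(frames, step_size=0.1, iters=50):
    """Toy test-time guidance: nudge per-frame latents toward their
    temporal neighbours by gradient descent on the sum of squared
    differences between adjacent frames (an assumed stand-in loss)."""
    x = frames.copy()
    for _ in range(iters):
        # Gradient of sum_t ||x[t+1] - x[t]||^2 w.r.t. every frame.
        grad = np.zeros_like(x)
        grad[1:] += 2 * (x[1:] - x[:-1])
        grad[:-1] += 2 * (x[:-1] - x[1:])
        x -= step_size * grad
    return x

frames = np.random.rand(8, 16)   # 8 frames, toy 16-dim per-frame latents
smoothed = consistency_guidance(frames)
```

After the update, adjacent frames disagree less than before, which is the qualitative effect the paper's guidance aims for on garment geometry; the actual method applies such corrections inside the diffusion sampling loop rather than as a standalone post-process.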

📝 Abstract
Reconstructing 3D clothed humans from monocular images and videos is a fundamental problem with applications in virtual try-on, avatar creation, and mixed reality. Despite significant progress in human body recovery, accurately reconstructing garment geometry, particularly for loose-fitting clothing, remains an open challenge. We propose a unified framework for high-fidelity 3D garment reconstruction from both single images and video sequences. Our approach combines Implicit Sewing Patterns (ISP) with a generative diffusion model to learn expressive garment shape priors in 2D UV space. Leveraging these priors, we introduce a mapping model that establishes correspondences between image pixels, UV pattern coordinates, and 3D geometry, enabling accurate and detailed garment reconstruction from single images. We further extend this formulation to dynamic reconstruction by introducing a spatio-temporal diffusion scheme with test-time guidance to enforce long-range temporal consistency. We also develop analytic projection-based constraints that preserve image-aligned geometry in visible regions while enforcing coherent completion in occluded areas over time. Although trained exclusively on synthetically simulated cloth data, our method generalizes well to real-world imagery and consistently outperforms existing approaches on both tight- and loose-fitting garments. The reconstructed garments preserve fine geometric detail while exhibiting realistic dynamic motion, supporting downstream applications such as texture editing, garment retargeting, and animation.
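The mapping model described above links image pixels to UV pattern coordinates and then to 3D geometry. Its core lookup step, lifting each pixel's predicted UV coordinate through a UV-space 3D position field, can be sketched as follows. The array names (`pixel_uv`, `uv_position_field`) and the bilinear sampler are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bilinear_sample(field, uv):
    """Bilinearly sample an (H, W, C) field at continuous UV coords in [0, 1]."""
    H, W, _ = field.shape
    u = np.clip(uv[..., 0], 0.0, 1.0) * (W - 1)
    v = np.clip(uv[..., 1], 0.0, 1.0) * (H - 1)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, W - 1), np.minimum(v0 + 1, H - 1)
    fu, fv = (u - u0)[..., None], (v - v0)[..., None]
    top = field[v0, u0] * (1 - fu) + field[v0, u1] * fu
    bot = field[v1, u0] * (1 - fu) + field[v1, u1] * fu
    return top * (1 - fv) + bot * fv

# Hypothetical inputs: a per-pixel UV map predicted for the garment region,
# and a UV-space 3D position field (the role an ISP-style prior plays).
pixel_uv = np.random.rand(64, 64, 2)             # predicted UV per image pixel
uv_position_field = np.random.rand(128, 128, 3)  # 3D point stored per UV cell

# Lifting: look up each pixel's 3D garment point through its UV coordinate.
pixel_xyz = bilinear_sample(uv_position_field, pixel_uv)
print(pixel_xyz.shape)  # (64, 64, 3)
```

Routing the lookup through UV space is what lets the shape prior fill occluded pattern regions: any UV location has a 3D point, whether or not a visible pixel maps to it.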
Problem

Research questions and friction points this paper is trying to address.

3D garment reconstruction
loose-fitting clothing
monocular images
spatio-temporal consistency
garment geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion model
implicit sewing patterns
spatio-temporal reconstruction
UV mapping
garment geometry