NGD: Neural Gradient Based Deformation for Monocular Garment Reconstruction

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic clothing reconstruction from monocular video suffers from geometric detail loss and deformation artifacts: implicit volumetric methods struggle to capture high-frequency wrinkles, while template-based displacement approaches often introduce mesh distortions. This paper proposes a neural gradient-driven explicit mesh deformation framework that replaces vertex displacement with differentiable geometric gradients, eliminating deformation artifacts. An adaptive remeshing strategy accurately captures dynamic surface details such as flowing skirts, and dynamic texture maps are jointly optimized with differentiable rendering to recover per-frame illumination, shadows, and fabric textures with high fidelity. The method integrates the strengths of implicit modeling and explicit deformation in an end-to-end framework and significantly outperforms state-of-the-art approaches across multiple benchmarks, achieving a 2.1 dB PSNR improvement alongside substantial gains in geometric detail, visual realism, and motion consistency.

📝 Abstract
Dynamic garment reconstruction from monocular video is an important yet challenging task due to the complex dynamics and unconstrained nature of garments. Recent advances in neural rendering have enabled high-quality geometric reconstruction with image/video supervision. However, implicit representation methods that use volume rendering tend to produce overly smooth geometry and fail to model high-frequency details. Template reconstruction methods model explicit geometry, but they rely on vertex displacement for deformation, which results in artifacts. Addressing these limitations, we propose NGD, a Neural Gradient-based Deformation method to reconstruct dynamically evolving textured garments from monocular videos. Additionally, we propose a novel adaptive remeshing strategy for modeling dynamically evolving surfaces, such as the wrinkles and pleats of skirts, leading to high-quality reconstruction. Finally, we learn dynamic texture maps to capture per-frame lighting and shadow effects. We provide extensive qualitative and quantitative evaluations demonstrating significant improvements over existing SOTA methods, yielding high-quality garment reconstructions.
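The abstract contrasts vertex-displacement deformation with gradient-based deformation. As a rough illustration of the latter idea (a toy sketch, not the paper's NGD implementation), the snippet below deforms a two-triangle mesh by prescribing a hypothetical 3×3 deformation gradient per face and recovering vertex positions with a Poisson-style least-squares solve; the mesh, the gradients, and all variable names are assumptions made up for this example.

```python
import numpy as np

# Template mesh: a flat two-triangle quad in the xy-plane.
P = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
faces = [(0, 1, 2), (0, 2, 3)]

# Hypothetical per-face deformation gradients (in a learned setting these
# would be network outputs): here both faces share a 1.5x stretch along x.
J = {f: np.diag([1.5, 1.0, 1.0]) for f in faces}

# Least-squares system: for every edge (i, j) of every face, ask that
#   v_j - v_i = J_f @ (p_j - p_i),
# plus one anchor equation v_0 = p_0 to pin the global translation.
n = len(P)
rows, rhs = [], []
for f in faces:
    for i, j in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]:
        for d in range(3):               # one scalar equation per coordinate
            r = np.zeros(3 * n)
            r[3 * j + d], r[3 * i + d] = 1.0, -1.0
            rows.append(r)
        rhs.extend(J[f] @ (P[j] - P[i]))
for d in range(3):                       # anchor vertex 0 at its rest position
    r = np.zeros(3 * n)
    r[d] = 1.0
    rows.append(r)
    rhs.append(P[0, d])

V, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
V = V.reshape(n, 3)
print(np.round(V, 3))  # x-coordinates stretched by 1.5; y, z unchanged
```

Because the unknowns are edge differences rather than raw offsets, inconsistent per-face gradients get reconciled globally by the solve instead of tearing the mesh, which is the intuition behind preferring gradients over direct vertex displacement.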
Problem

Research questions and friction points this paper is trying to address.

Reconstructing dynamic garments from monocular videos
Modeling high-frequency details like wrinkles and pleats
Capturing dynamic lighting and shadow effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Gradient-based Deformation for garment reconstruction
Adaptive remeshing strategy for dynamic surface details
Dynamic texture maps for lighting and shadow effects
Soham Dasgupta
Indian Institute of Technology Jodhpur
Shanthika Naik
PhD, LIRIS, CNRS Lyon
Computer Graphics · Garment simulation · Deep learning
Preet Savalia
Indian Institute of Technology Jodhpur
Sujay Kumar Ingle
Indian Institute of Technology Jodhpur
Avinash Sharma
Indian Institute of Technology Jodhpur