HoloGarment: 360° Novel View Synthesis of In-the-Wild Garments

📅 2025-09-15
🤖 AI Summary
This work addresses the challenging problem of 360° novel-view synthesis (NVS) of clothing under severe occlusion, complex poses, and dynamic cloth deformation in real-world scenarios. We propose a video-driven implicit neural modeling framework that constructs a shared clothing embedding space bridging real-world videos and synthetic 3D data. Our method introduces a pose-invariant "garment atlas" representation and employs implicit training with cross-domain embedding learning, enabling high-fidelity 360° reconstruction from as few as 1–3 input frames or a short video clip. The approach significantly improves multi-view geometric and textural consistency, achieving state-of-the-art performance in wrinkle modeling, occlusion recovery, and pose generalization. Quantitative and qualitative evaluations demonstrate superior detail fidelity and visual coherence compared to existing methods.

📝 Abstract
Novel view synthesis (NVS) of in-the-wild garments is a challenging task due to significant occlusions, complex human poses, and cloth deformations. Prior methods rely on synthetic 3D training data consisting of mostly unoccluded and static objects, leading to poor generalization on real-world clothing. In this paper, we propose HoloGarment (Hologram-Garment), a method that takes 1-3 images or a continuous video of a person wearing a garment and generates 360° novel views of the garment in a canonical pose. Our key insight is to bridge the domain gap between real and synthetic data with a novel implicit training paradigm leveraging a combination of large-scale real video data and small-scale synthetic 3D data to optimize a shared garment embedding space. During inference, the shared embedding space further enables dynamic video-to-360° NVS through the construction of a garment "atlas" representation by finetuning a garment embedding on a specific real-world video. The atlas captures garment-specific geometry and texture across all viewpoints, independent of body pose or motion. Extensive experiments show that HoloGarment achieves state-of-the-art performance on NVS of in-the-wild garments from images and videos. Notably, our method robustly handles challenging real-world artifacts -- such as wrinkling, pose variation, and occlusion -- while maintaining photorealism, view consistency, fine texture details, and accurate geometry. Visit our project page for additional results: https://johannakarras.github.io/HoloGarment
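The abstract's core idea of an "atlas" -- a pose-independent store of garment appearance across all viewpoints, built up from video frames -- can be illustrated with a toy sketch. Everything below (the `Frame` and `GarmentAtlas` names, azimuth binning, per-bin color averaging) is an illustrative assumption, not the paper's actual representation, which uses learned implicit embeddings rather than explicit bins.

```python
# Toy sketch of a garment "atlas": accumulate per-viewpoint appearance from
# video frames, independent of body pose. Names and binning are illustrative
# assumptions, not the paper's implicit-embedding formulation.
from dataclasses import dataclass


@dataclass
class Frame:
    azimuth_deg: float                  # estimated camera viewpoint around the garment
    color: tuple[float, float, float]   # mean garment color observed in this frame


class GarmentAtlas:
    """Averages frame observations into azimuth bins; pose never enters."""

    def __init__(self, n_bins: int = 36):
        self.n_bins = n_bins
        self.sums = [[0.0, 0.0, 0.0] for _ in range(n_bins)]
        self.counts = [0] * n_bins

    def _bin(self, azimuth_deg: float) -> int:
        return int((azimuth_deg % 360.0) / (360.0 / self.n_bins))

    def add(self, frame: Frame) -> None:
        b = self._bin(frame.azimuth_deg)
        for c in range(3):
            self.sums[b][c] += frame.color[c]
        self.counts[b] += 1

    def query(self, azimuth_deg: float):
        """Averaged appearance for a (possibly novel) viewpoint, or None."""
        b = self._bin(azimuth_deg)
        if self.counts[b] == 0:
            return None  # never-observed viewpoint; a real system would hallucinate it
        return tuple(s / self.counts[b] for s in self.sums[b])
```

The `query` returning `None` for unobserved bins marks exactly the gap the paper's learned embedding space fills: synthesizing plausible appearance for viewpoints the video never covered.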
Problem

Research questions and friction points this paper is trying to address.

Novel view synthesis of real-world garments with occlusions
Bridging domain gap between synthetic and real clothing data
Generating 360-degree views from limited input images/videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages real video and synthetic data
Creates shared implicit garment embedding space
Constructs canonical atlas for view synthesis
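The shared embedding space listed above rests on pulling a garment's real-video embedding and its synthetic-3D embedding toward each other during training. A minimal sketch of that alignment idea, assuming a plain squared-distance objective and fixed-length list embeddings (the paper's actual loss and architecture are not specified here):

```python
# Minimal sketch of cross-domain embedding alignment: gradient descent on
# ||real - synth||^2, moving both embeddings toward a shared point. The loss,
# learning rate, and step count are illustrative assumptions.

def align(real: list[float], synth: list[float],
          lr: float = 0.1, steps: int = 100):
    """Return copies of both embeddings after pulling them together."""
    real, synth = real[:], synth[:]
    for _ in range(steps):
        for i in range(len(real)):
            g = 2.0 * (real[i] - synth[i])  # gradient of squared distance w.r.t. real[i]
            real[i] -= lr * g
            synth[i] += lr * g
    return real, synth
```

Each step shrinks the per-coordinate gap by a constant factor (here 0.6), so the two embeddings converge to their common midpoint -- a stand-in for the shared garment representation both domains are trained to agree on.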