🤖 AI Summary
This work addresses multi-layer clothed human 3D reconstruction from monocular RGB image sequences. We propose ReMu, which operates on a new capture setup, "Image Layers", in which a subject is recorded wearing successive layers of clothing with a single RGB camera, without requiring multi-view rigs, parametric garment templates, or clothing-category priors. Each garment layer is represented with an implicit neural field and reconstructed and aligned in a shared coordinate system defined by the canonical body pose; a collision-aware optimization, coupled with boundary refinement, then suppresses inter-layer penetration. To our knowledge, this is the first end-to-end, template-free, and category-agnostic approach to multi-layer clothing reconstruction that achieves physically plausible geometric alignment in a canonical pose space. Experiments show reconstruction accuracy and penetration suppression competitive with category-specific models, while supporting diverse garment styles and significantly reducing capture and editing overhead.
📝 Abstract
The reconstruction of multi-layer 3D garments typically requires expensive multi-view capture setups and substantial specialized 3D editing effort. To support the creation of lifelike clothed human avatars, we introduce ReMu for reconstructing multi-layer clothed humans in a new setup, Image Layers, which captures a subject wearing different layers of clothing with a single RGB camera. Reconstructing physically plausible multi-layer 3D garments requires a unified 3D representation that models these garments in a layered manner. Thus, we first reconstruct and align each garment layer in a shared coordinate system defined by the canonical body pose. Afterwards, we introduce a collision-aware optimization process to resolve interpenetration, and further refine the garment boundaries by leveraging implicit neural fields. Notably, our method is template-free and category-agnostic, which enables the reconstruction of 3D garments in diverse clothing styles. Through our experiments, we show that our method reconstructs nearly penetration-free 3D clothed humans and achieves competitive performance compared to category-specific methods. Project page: https://eth-ait.github.io/ReMu/
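To make the collision-aware optimization concrete, the sketch below illustrates one common way such a penalty is formulated: if the inner layer is represented by a signed distance field (SDF, negative inside the surface), then outer-layer sample points falling inside the inner layer incur a hinge penalty, while penetration-free points contribute zero. This is a minimal illustration under assumed details (the sphere SDF, the `margin` parameter, and the function names are hypothetical), not the authors' exact loss.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Toy SDF of a sphere standing in for the inner clothing layer:
    negative inside the surface, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

def collision_penalty(outer_points, inner_sdf, margin=0.0):
    """Hinge penalty on outer-layer points that penetrate the inner layer.

    A point penetrates when inner_sdf(point) < margin; the penalty is the
    magnitude of that violation, so non-penetrating points add nothing.
    """
    d = inner_sdf(outer_points)
    return np.maximum(margin - d, 0.0).sum()

# Two outer-layer sample points: one inside the inner layer, one outside.
pts = np.array([
    [0.5, 0.0, 0.0],   # 0.5 units inside the sphere -> penalized
    [1.5, 0.0, 0.0],   # outside the sphere -> no penalty
])
loss = collision_penalty(pts, sphere_sdf)
print(loss)  # 0.5: only the penetrating point contributes
```

Minimizing such a term (typically alongside data and regularization losses, with gradients flowing into the garment geometry) pushes outer-layer points back outside the inner layer, which is the intuition behind a collision-aware optimization.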