GarmageNet: A Dataset and Scalable Representation for Generic Garment Modeling

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality data and the difficulty of representing non-watertight, multi-layer garment geometry in high-fidelity clothing modeling, this paper introduces Garmage, a neural-network- and CG-friendly dual-domain representation that jointly encodes 2D structured sewing-pattern images and the corresponding 3D garment geometry, unifying multi-layer geometry and seam topology within a single framework. Built on Garmage, the authors propose GarmageNet, a generative framework for text- or pattern-guided, body-adaptive, and editable multi-layer garment synthesis, complemented by a vertex-level seam-topology reconstruction algorithm. The method achieves end-to-end compatibility between 2D image algorithms and industrial 3D simulation pipelines (e.g., Houdini, Blender, Omniverse). The authors further release a large-scale, industrial-grade, high-fidelity garment dataset featuring dense vertex correspondences, sewing-pattern annotations, and a standardized conversion pipeline, establishing a new benchmark for generative clothing modeling and physics-based simulation.

📝 Abstract
High-fidelity garment modeling remains challenging due to the lack of large-scale, high-quality datasets and efficient representations capable of handling non-watertight, multi-layer geometries. In this work, we introduce Garmage, a neural-network-and-CG-friendly garment representation that seamlessly encodes the accurate geometry and sewing pattern of complex multi-layered garments as a structured set of per-panel geometry images. As a dual-2D-3D representation, Garmage achieves an unprecedented integration of 2D image-based algorithms with 3D modeling workflows, enabling high-fidelity, non-watertight, multi-layered garment geometries with direct compatibility for industrial-grade simulations. Built upon this representation, we present GarmageNet, a novel generation framework capable of producing detailed multi-layered garments with body-conforming initial geometries and intricate sewing patterns, based on user prompts or existing in-the-wild sewing patterns. Furthermore, we introduce a robust stitching algorithm that recovers per-vertex stitches, ensuring seamless integration into flexible simulation pipelines for downstream editing of sewing patterns, material properties, and dynamic simulations. Finally, we release an industrial-standard, large-scale, high-fidelity garment dataset featuring detailed annotations, vertex-wise correspondences, and a robust pipeline for converting unstructured production sewing patterns into GarmageNet standard structural assets, paving the way for large-scale, industrial-grade garment generation systems.
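The abstract describes encoding each sewing-pattern panel as a geometry image, i.e., a 2D grid whose valid pixels store 3D surface positions. A minimal sketch of such a per-panel record, with illustrative field names and a grid resolution that are assumptions rather than details from the paper:

```python
import numpy as np

# Hypothetical sketch of a geometry-image panel record: each panel is
# rasterized onto a fixed-size UV grid where every valid pixel stores a
# 3D surface position. Field names and resolution are illustrative.

RES = 64  # per-panel grid resolution (assumption)

def make_panel(positions, mask):
    """Bundle a panel's geometry image with its validity mask."""
    assert positions.shape == (RES, RES, 3)
    assert mask.shape == (RES, RES)
    return {"positions": positions, "mask": mask.astype(bool)}

def panel_vertices(panel):
    """Recover the panel's 3D point set from its geometry image."""
    return panel["positions"][panel["mask"]]

# Toy example: a flat square panel lying in the z = 0 plane.
u, v = np.meshgrid(np.linspace(0, 1, RES), np.linspace(0, 1, RES))
pos = np.stack([u, v, np.zeros_like(u)], axis=-1)
panel = make_panel(pos, np.ones((RES, RES)))
print(panel_vertices(panel).shape)  # (4096, 3)
```

Because every panel lives on the same regular grid, standard 2D image networks can process the stacked channels directly, which is the property the dual-2D-3D claim relies on.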
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale, high-quality garment datasets
No efficient representation for non-watertight, multi-layer garment geometry
Difficulty integrating 2D image-based and 3D garment modeling workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Garmage: a neural-network- and CG-friendly garment representation
Dual 2D-3D representation bridging image-based algorithms and 3D modeling
Robust stitching algorithm recovering per-vertex stitches for simulation pipelines
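The stitching contribution above recovers per-vertex stitches so simulators can sew panels together. The paper's algorithm is not spelled out here; as a hedged illustration of the general idea, one simple way to pair vertices across two corresponding seam polylines is by normalized arc length:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): given two seam polylines
# sampled from adjacent panel boundaries, pair vertices by normalized
# arc-length parameter so each vertex on one edge gets a stitch partner.

def arc_length_params(curve):
    """Cumulative arc length of a polyline, normalized to [0, 1]."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    return t / t[-1]

def pair_stitches(edge_a, edge_b):
    """Match each vertex of edge_a to the vertex of edge_b whose
    normalized arc-length parameter is closest."""
    ta, tb = arc_length_params(edge_a), arc_length_params(edge_b)
    return [(i, int(np.argmin(np.abs(tb - t)))) for i, t in enumerate(ta)]

# Toy seams: the same straight segment sampled at different densities.
a = np.stack([np.linspace(0, 1, 5), np.zeros(5)], axis=1)
b = np.stack([np.linspace(0, 1, 9), np.zeros(9)], axis=1)
print(pair_stitches(a, b))  # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
```

A production pipeline would additionally need seam labels to decide which boundary segments correspond and orientation checks to avoid reversed pairings; this sketch only shows the parameter-matching step.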