GarmentX: Autoregressive Parametric Representations for High-Fidelity 3D Garment Generation

📅 2025-04-29
🤖 AI Summary
Traditional single-image garment reconstruction methods directly predict 2D pattern outlines and connectivity, often yielding self-intersecting or physically infeasible structures. To address this, we propose the first parametric framework for wearable 3D garment generation. Our method (1) introduces a GarmentCode-compatible, editable parametric representation encoding topology, geometry, and sewing constraints; (2) employs masked autoregressive modeling to explicitly enforce topological validity and geometric consistency across pattern pieces; and (3) constructs GarmentX—a large-scale parametric-image paired dataset (378K samples)—alongside a scalable synthetic data generation pipeline. Experiments demonstrate state-of-the-art performance in geometric fidelity and image alignment. Generated garments are guaranteed self-intersection-free, compatible with physics-based simulation, and support intuitive parametric editing. Our approach significantly improves diversity, physical plausibility, and editability of 3D garments.

📝 Abstract
This work presents GarmentX, a novel framework for generating diverse, high-fidelity, and wearable 3D garments from a single input image. Traditional garment reconstruction methods directly predict 2D pattern edges and their connectivity, an overly unconstrained approach that often leads to severe self-intersections and physically implausible garment structures. In contrast, GarmentX introduces a structured and editable parametric representation compatible with GarmentCode, ensuring that the decoded sewing patterns always form valid, simulation-ready 3D garments while allowing for intuitive modifications of garment shape and style. To achieve this, we employ a masked autoregressive model that sequentially predicts garment parameters, leveraging autoregressive modeling for structured generation while mitigating inconsistencies in direct pattern prediction. Additionally, we introduce the GarmentX dataset, a large-scale dataset of 378,682 garment parameter-image pairs, constructed through an automatic data generation pipeline that synthesizes diverse and high-quality garment images conditioned on parametric garment representations. By integrating our method with the GarmentX dataset, we achieve state-of-the-art performance in geometric fidelity and input image alignment, significantly outperforming prior approaches. We will release the GarmentX dataset upon publication.
Problem

Research questions and friction points this paper is trying to address.

Generating high-fidelity 3D garments from single images
Overcoming self-intersections in traditional garment reconstruction
Creating editable parametric representations for valid sewing patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive model for sequential garment parameter prediction
Editable parametric representation ensuring valid garment structures
Large-scale dataset with automatic garment-image synthesis
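The sequential parameter prediction described above can be illustrated with a minimal, hypothetical sketch: parameters are generated one at a time, each prediction conditioned on the values already produced, and clamped to a valid schema range so every decoded garment is well-formed by construction. The parameter names, ranges, and the toy `predict_next` function are illustrative assumptions, not the paper's actual GarmentCode-compatible parameterization or trained model.

```python
# Hypothetical garment parameter schema (names and ranges are illustrative,
# not the paper's actual GarmentCode-compatible parameterization).
PARAM_SCHEMA = [
    ("sleeve_length", 0.0, 1.0),
    ("skirt_flare",   0.0, 1.0),
    ("waist_width",   0.3, 0.9),
    ("collar_depth",  0.0, 0.5),
]

def predict_next(context, lo, hi):
    """Stand-in for the learned model: maps the already-generated context
    to a value inside the valid range. A real masked autoregressive model
    would also condition on features of the input image."""
    seed = sum(context) if context else 0.5
    frac = (seed * 0.37) % 1.0          # deterministic toy "prediction"
    return lo + frac * (hi - lo)

def generate_garment_params():
    """Sequentially fill in parameters; each prediction conditions on the
    ones generated so far, and restricting outputs to the schema range
    keeps every decoded parameter set valid by construction."""
    context, params = [], {}
    for name, lo, hi in PARAM_SCHEMA:
        value = predict_next(context, lo, hi)
        params[name] = value
        context.append(value)
    return params

params = generate_garment_params()
for name, lo, hi in PARAM_SCHEMA:
    assert lo <= params[name] <= hi     # validity is enforced by the range
print(params)
```

Constraining the output space to a parametric schema, rather than predicting raw pattern edges, is what rules out self-intersecting or physically infeasible structures at the representation level.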
👥 Authors

Jingfeng Guo
South China University of Technology

Jinnan Chen
National University of Singapore
Computer Vision, Generative Models

Weikai Chen
Principal Research Scientist, Tencent America
3D AIGC, 3D Vision, Computer Graphics, VLM

Zhenyu Sun
South China University of Technology

Lanjiong Li
The Hong Kong University of Science and Technology (Guangzhou)
Generative Model

Baozhu Zhao
South China University of Technology

Lingting Zhu
The University of Hong Kong
Generative Models, Computer Vision

Xin Wang
LIGHTSPEED

Qi Liu
South China University of Technology