SemanticGarment: Semantic-Controlled Generation and Editing of 3D Gaussian Garments

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D garment generation methods suffer from limited multi-view consistency, geometric/texture fidelity, and editing flexibility. To address these challenges, we propose a semantic-driven 3D garment generation and editing framework built upon 3D Gaussian Splatting. Our method integrates structured human priors with fine-grained garment semantics—eliminating the need for predefined mesh templates or manual rigging—while introducing a self-occlusion-aware optimization strategy that effectively mitigates holes and artifacts in single-image reconstruction. It supports high-fidelity global and local editing driven by text or image prompts, enabling multi-view-consistent, wear-aware 3D garment synthesis. Experiments demonstrate superior performance over state-of-the-art approaches in generation quality, editing efficiency, and interactivity. Our work establishes a new paradigm for photorealistic, customizable digital garment generation.

📝 Abstract
3D digital garment generation and editing play a pivotal role in fashion design, virtual try-on, and gaming. Traditional methods struggle to meet the growing demand due to technical complexity and high resource costs. Learning-based approaches offer faster, more diverse garment synthesis tailored to specific requirements, reducing human effort and time costs. However, they still face challenges such as inconsistent multi-view geometry or textures and heavy reliance on detailed garment topology and manual rigging. We propose SemanticGarment, a 3D Gaussian-based method that realizes high-fidelity 3D garment generation from text or image prompts and supports semantic-based interactive editing for flexible user customization. To ensure multi-view consistency and garment fitting, we leverage structural human priors for the generative model by introducing a 3D semantic clothing model, which initializes the geometric structure and lays the groundwork for view-consistent garment generation and editing. Without the need to regenerate or rely on existing mesh templates, our approach allows rapid and diverse modifications to existing Gaussians, either globally or within a local region. To address the artifacts caused by self-occlusion in garment reconstruction from a single image, we develop a self-occlusion optimization strategy that mitigates the holes and artifacts arising when directly animating self-occluded garments. Extensive experiments demonstrate our superior performance in 3D garment generation and editing.
Problem

Research questions and friction points this paper is trying to address.

Achieving multi-view consistent 3D garment generation from text or image prompts
Enabling semantic-based interactive editing without regenerating mesh templates
Addressing self-occlusion artifacts in single-image garment reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian-based method for garment generation
Leverages structural human priors for consistency
Self-occlusion optimization for single-image reconstruction
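The key enabler of local editing without mesh templates is that each Gaussian can carry a semantic label, so an edit touches only the Gaussians of one garment part. A minimal sketch of that idea (the per-Gaussian label array, label values, and color-edit operation here are illustrative assumptions, not the paper's actual interface):

```python
import numpy as np

# Hypothetical per-Gaussian semantic labels (e.g. 0=body, 1=sleeve, 2=collar).
# The paper does not specify this representation; it is assumed for illustration.
SLEEVE = 1

def select_local_region(labels: np.ndarray, target_label: int) -> np.ndarray:
    """Boolean mask over the Gaussians belonging to one semantic part."""
    return labels == target_label

def apply_local_edit(colors: np.ndarray, mask: np.ndarray,
                     new_color: np.ndarray) -> np.ndarray:
    """Recolor only the masked Gaussians, leaving the rest untouched."""
    edited = colors.copy()
    edited[mask] = new_color
    return edited

# Toy example: 5 Gaussians, two of them labeled as sleeve.
labels = np.array([0, 1, 1, 2, 0])
colors = np.zeros((5, 3))
mask = select_local_region(labels, SLEEVE)
edited = apply_local_edit(colors, mask, np.array([1.0, 0.0, 0.0]))
```

Because the mask restricts the edit to a semantic region, the remaining Gaussians are untouched and no regeneration or re-rigging is needed.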