From Air to Wear: Personalized 3D Digital Fashion with AR/VR Immersive 3D Sketching

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of enabling non-expert users to autonomously create high-fidelity 3D clothing in AR/VR environments, this paper introduces the first sketch-driven 3D garment generation framework. Methodologically, we construct KO3DClothes—the first paired dataset of 3D garments and hand-drawn sketches—and design a shared latent-space sketch encoder that jointly models sketch-to-3D geometry and texture mapping. We further propose an adaptive curriculum learning strategy to enhance the stability and fidelity of conditional diffusion-based generation. The framework supports immersive, freehand sketching interaction directly within AR/VR. Quantitative and qualitative evaluations demonstrate substantial improvements in generation quality and usability over state-of-the-art methods. User studies confirm that novice users can produce high-fidelity, personalized digital garments within minutes, effectively lowering the barrier to professional 3D modeling.

📝 Abstract
In the era of immersive consumer electronics, such as AR/VR headsets and smart devices, people increasingly seek ways to express their identity through virtual fashion. However, existing 3D garment design tools remain inaccessible to everyday users due to steep technical barriers and limited data. In this work, we introduce a 3D sketch-driven 3D garment generation framework that empowers ordinary users (even those without design experience) to create high-quality digital clothing through simple 3D sketches in AR/VR environments. By combining a conditional diffusion model, a sketch encoder trained in a shared latent space, and an adaptive curriculum learning strategy, our system interprets imprecise, free-hand input and produces realistic, personalized garments. To address the scarcity of training data, we also introduce KO3DClothes, a new dataset of paired 3D garments and user-created sketches. Extensive experiments and user studies confirm that our method significantly outperforms existing baselines in both fidelity and usability, demonstrating its promise for democratized fashion design on next-generation consumer platforms.
Problem

Research questions and friction points this paper is trying to address.

Democratizing 3D garment design for non-experts
Overcoming technical barriers in AR/VR fashion creation
Addressing data scarcity for 3D sketch-to-garment generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D sketch-driven garment generation framework
Conditional diffusion model with sketch encoder
KO3DClothes dataset for training data
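The pipeline described above (a sketch encoder producing a shared-latent condition that guides iterative diffusion denoising) can be illustrated with a minimal NumPy toy. Everything here is a hypothetical stand-in: the random projection "encoder", the linear noise predictor, and the toy schedule are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_sketch(sketch_points, dim=8):
    """Toy sketch encoder: pool free-hand 3D stroke points and project them
    into a latent vector (stand-in for the learned shared-latent encoder)."""
    proj = rng.standard_normal((sketch_points.shape[1], dim))
    return np.tanh(sketch_points.mean(axis=0) @ proj)

def denoise_step(x_t, t, cond, weights):
    """One conditional denoising step: predict noise from (x_t, t, cond)
    with a linear stand-in for the learned network, then apply a
    DDPM-style update under a toy linear noise schedule."""
    eps_hat = weights @ np.concatenate([x_t, cond, [float(t)]])
    alpha = 1.0 - 0.02 * t  # toy schedule, not the paper's
    return (x_t - (1.0 - alpha) * eps_hat) / np.sqrt(alpha)

# Usage: refine a noisy garment latent toward a sample conditioned on the sketch.
sketch = rng.standard_normal((50, 3))   # 50 free-hand stroke points in 3D
cond = encode_sketch(sketch)            # shared-latent condition, shape (8,)
x = rng.standard_normal(16)             # noisy garment latent, shape (16,)
W = 0.01 * rng.standard_normal((16, 16 + 8 + 1))
for t in range(10, 0, -1):              # iterate denoising from t=10 down to 1
    x = denoise_step(x, t, cond, W)
```

The key structural point this sketch captures is that the same condition vector is injected at every denoising step, which is how sketch geometry steers generation throughout the reverse process.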
Ying Zang
School of Information Engineering, Huzhou University
Yuanqi Hu
School of Information Engineering, Huzhou University
Xinyu Chen
School of Information Engineering, Huzhou University
Yuxia Xu
School of Information Engineering, Huzhou University
Suhui Wang
School of Information Engineering, Huzhou University
Chunan Yu
School of Information Engineering, Huzhou University
Lanyun Zhu
NTU, CityUHK, SUTD, BUAA
Multimodal Learning, Computer Vision, Resource-efficient Learning, Large Vision-Language Model
Deyi Ji
Tencent; USTC Ph.D.
Multimodal LLM, Computer Vision, NLP
Xin Xu
KOKONI, Moxin (Huzhou) Technology Co., LTD.
Tianrun Chen
Zhejiang University
Computer Vision, 3D Reconstruction, Computational Imaging, Large Vision-Language Model