4-Doodle: Text to 3D Sketches that Move!

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three core challenges in text-driven dynamic 3D vector sketch animation: (1) scarcity of paired text–3D sketch data, (2) limitations of conventional 3D representations in capturing the structural abstraction inherent to sketches, and (3) difficulty in jointly ensuring temporal coherence and multi-view consistency. We propose the first training-free generative framework for this task. Our method employs dual-space distillation, tightly coupling image/video diffusion models with differentiable Bézier-curve-based multi-view geometry modeling. It further introduces a structure-aware motion module and a temporally aware prior to decouple shape preservation from articulated motion control. Experiments demonstrate that our approach significantly outperforms existing baselines in temporal realism, structural stability, fidelity, and controllability, enabling expressive, free-form 4D creative generation.

📝 Abstract
We present a novel task: text-to-3D sketch animation, which aims to bring free-form sketches to life in dynamic 3D space. Unlike prior works focused on photorealistic content generation, we target sparse, stylized, and view-consistent 3D vector sketches, a lightweight and interpretable medium well-suited for visual communication and prototyping. However, this task is very challenging: (i) no paired dataset exists for text and 3D (or 4D) sketches; (ii) sketches require structural abstraction that is difficult to model with conventional 3D representations like NeRFs or point clouds; and (iii) animating such sketches demands temporal coherence and multi-view consistency, which current pipelines do not address. Therefore, we propose 4-Doodle, the first training-free framework for generating dynamic 3D sketches from text. It leverages pretrained image and video diffusion models through a dual-space distillation scheme: one space captures multi-view-consistent geometry using differentiable Bézier curves, while the other encodes motion dynamics via temporally aware priors. Unlike prior work (e.g., DreamFusion), which optimizes from a single view per step, our multi-view optimization ensures structural alignment and avoids view ambiguity, which is critical for sparse sketches. Furthermore, we introduce a structure-aware motion module that separates shape-preserving trajectories from deformation-aware changes, enabling expressive motion such as flipping, rotation, and articulated movement. Extensive experiments show that our method produces temporally realistic and structurally stable 3D sketch animations, outperforming existing baselines in both fidelity and controllability. We hope this work serves as a step toward more intuitive and accessible 4D content creation.
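The differentiable Bézier-curve representation mentioned in the abstract can be illustrated with a minimal sketch: sampling points along a cubic 3D Bézier curve from its four control points, so that rendered sample positions are a smooth (and hence differentiable) function of the control points. Everything below is an illustrative assumption on our part (NumPy, cubic order, 64 samples); the paper's actual pipeline presumably feeds such curves through a differentiable rasterizer and an autodiff framework rather than plain NumPy.

```python
import numpy as np

def cubic_bezier(control_points, num_samples=64):
    """Sample points along a cubic Bezier curve.

    control_points: (4, 3) array of 3D control points.
    Returns a (num_samples, 3) array of points on the curve.
    """
    # Parameter values t in [0, 1], shaped for broadcasting against (3,) points.
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    p0, p1, p2, p3 = control_points
    # Standard cubic Bernstein basis; each term broadcasts to (num_samples, 3).
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)
```

Because the output is polynomial in the control points, gradients of any image-space loss flow back to them directly, which is what makes curve-based sketch optimization against a diffusion prior possible.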
Problem

Research questions and friction points this paper is trying to address.

Generating dynamic 3D vector sketches from text descriptions
Ensuring multi-view consistency and temporal coherence in animations
Overcoming lack of paired text-3D sketch training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework for dynamic 3D sketch generation
Dual-space distillation using pretrained diffusion models
Structure-aware motion module separating shape-preserving trajectories from deformation-aware changes
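For context, DreamFusion-style score distillation, the family of methods the abstract builds on and contrasts with, updates sketch parameters \(\theta\) by the gradient

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,
    \big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,
    \frac{\partial x}{\partial \theta} \right]
```

where \(x\) is the rendered sketch, \(x_t\) its noised version at timestep \(t\), \(y\) the text prompt, \(\hat{\epsilon}_\phi\) the pretrained diffusion model's noise prediction, and \(w(t)\) a timestep weighting. This is the standard single-view formulation; how the paper's dual-space scheme applies it across an image space and a video space is not detailed in this summary.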
Hao Chen — School of Artificial Intelligence, Beijing University of Posts and Telecommunications
Jiaqi Wang — School of Artificial Intelligence, Beijing University of Posts and Telecommunications
Yonggang Qi — Associate Professor, Beijing University of Posts and Telecommunications (computer vision; sketch-based vision learning algorithms and applications)
Ke Li — School of Artificial Intelligence, Beijing University of Posts and Telecommunications
Kaiyue Pang — SketchX, CVSSP, University of Surrey (Computer Vision; Machine Learning; Artificial Intelligence)
Yi-Zhe Song — SketchX Lab, CVSSP, University of Surrey (Computer Vision; Computer Graphics; Machine Learning; Artificial Intelligence)