🤖 AI Summary
This work addresses three core challenges in text-to-dynamic 3D vector sketch animation generation: (1) scarcity of paired text–3D sketch data, (2) limitations of conventional 3D representations in capturing the structural abstraction inherent to sketches, and (3) difficulty in jointly ensuring temporal coherence and multi-view consistency. We propose the first training-free generative framework for this task. Our method employs dual-space distillation, tightly coupling image and video diffusion models with multi-view geometry modeling based on differentiable Bézier curves. It further introduces a structure-aware motion module and a temporally-aware prior to decouple shape preservation from articulated motion control. Experiments demonstrate that our approach significantly outperforms existing baselines in temporal realism, structural stability, fidelity, and controllability, enabling expressive, free-form 4D creative generation.
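The geometric primitive underlying this framework is the differentiable cubic Bézier curve in 3D. As a minimal illustration (plain NumPy; the function name and shapes are our own, not from the paper), a curve can be sampled from its control points via the degree-3 Bernstein basis, so that curve samples are a linear and hence differentiable function of the control points being optimized:

```python
import numpy as np

def bezier_points(ctrl, n=64):
    """Sample n points on a cubic Bezier curve.

    ctrl: (4, 3) array of 3D control points; returns (n, 3) curve samples.
    Samples are linear in ctrl, so gradients w.r.t. control points are exact.
    """
    t = np.linspace(0.0, 1.0, n)[:, None]            # (n, 1) parameter values
    basis = np.hstack([(1 - t) ** 3,                 # degree-3 Bernstein basis
                       3 * t * (1 - t) ** 2,
                       3 * t ** 2 * (1 - t),
                       t ** 3])                      # (n, 4)
    return basis @ ctrl                              # (n, 3) points on the curve

# Collinear, equally spaced control points yield a straight segment.
ctrl = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
pts = bezier_points(ctrl)
```

Because the mapping from control points to samples is a fixed linear operator, any loss defined on rendered curve points backpropagates cleanly to the Bézier parameters, which is what makes score-distillation-style optimization of vector sketches feasible.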
📝 Abstract
We present a novel task: text-to-3D sketch animation, which aims to bring free-form sketches to life in dynamic 3D space. Unlike prior work focused on photorealistic content generation, we target sparse, stylized, and view-consistent 3D vector sketches, a lightweight and interpretable medium well-suited for visual communication and prototyping. However, this task is highly challenging: (i) no paired dataset exists for text and 3D (or 4D) sketches; (ii) sketches require structural abstraction that is difficult to model with conventional 3D representations such as NeRFs or point clouds; and (iii) animating such sketches demands temporal coherence and multi-view consistency, which current pipelines do not address. We therefore propose 4-Doodle, the first training-free framework for generating dynamic 3D sketches from text. It leverages pretrained image and video diffusion models through a dual-space distillation scheme: one space captures multi-view-consistent geometry using differentiable Bézier curves, while the other encodes motion dynamics via temporally-aware priors. Unlike prior work (e.g., DreamFusion), which optimizes from a single view per step, our multi-view optimization ensures structural alignment and avoids view ambiguity, which is critical for sparse sketches. Furthermore, we introduce a structure-aware motion module that separates shape-preserving trajectories from deformation-aware changes, enabling expressive motion such as flipping, rotation, and articulated movement. Extensive experiments show that our method produces temporally realistic and structurally stable 3D sketch animations, outperforming existing baselines in both fidelity and controllability. We hope this work serves as a step toward more intuitive and accessible 4D content creation.
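The multi-view optimization described above can be caricatured in a few lines. The sketch below (our own toy, not the paper's method) replaces diffusion guidance with a trivial stand-in objective, projects a Bézier curve orthographically under several camera rotations, and updates the control points with finite-difference gradient descent; in the real pipeline the per-view loss would instead come from score distillation against pretrained image/video diffusion models:

```python
import numpy as np

def sample_bezier(ctrl, n=32):
    """Sample a cubic Bezier curve; ctrl is (4, 3), result is (n, 3)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    basis = np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                       3 * t ** 2 * (1 - t), t ** 3])
    return basis @ ctrl

def rot_y(theta):
    """Rotation about the y-axis, standing in for a camera pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

def multiview_loss(flat_ctrl, thetas):
    """Toy stand-in for diffusion guidance: for each view, project the
    curve orthographically and penalize its distance from the origin."""
    pts = sample_bezier(flat_ctrl.reshape(4, 3))
    loss = 0.0
    for th in thetas:
        proj = (pts @ rot_y(th).T)[:, :2]            # rotate, drop depth
        loss += np.mean(proj ** 2)
    return loss / len(thetas)

def optimize(ctrl, thetas, steps=50, lr=0.5, eps=1e-4):
    """Gradient descent on the shared control points across ALL views,
    using central finite differences (12 parameters, so this is cheap)."""
    x = ctrl.ravel().astype(float).copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (multiview_loss(x + d, thetas)
                    - multiview_loss(x - d, thetas)) / (2 * eps)
        x -= lr * g
    return x.reshape(4, 3)
```

The point of the caricature is structural: every step aggregates gradients from multiple views into one update of the shared 3D curve, as opposed to single-view-per-step optimization, which is what resolves view ambiguity for sparse strokes.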