Sketch2PoseNet: Efficient and Generalized Sketch to 3D Human Pose Prediction

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sketch-based 3D human pose estimation suffers from poor generalization and heavy reliance on hand-crafted rules due to the high abstraction, severe scale distortion, and scarcity of large-scale, real-world annotated data. Method: We propose a “learning from synthesis” paradigm: (1) introducing SKEP-120K—the first large-scale synthetic sketch-to-3D-pose dataset; (2) designing an end-to-end differentiable framework integrating diffusion-based sketch generation, a 2D pose detector, generative priors, and a feed-forward network; and (3) incorporating multi-scale geometric consistency loss and self-contact constraint loss to enhance structural plausibility. Results: Our method achieves state-of-the-art performance across accuracy, inference speed, and cross-style robustness. Comprehensive evaluations—including quantitative metrics, qualitative visualization, and human subjective assessment—demonstrate consistent superiority. This work establishes an efficient and reliable paradigm for sketch-driven 3D modeling in animation and film production.
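The self-contact constraint loss mentioned above can be illustrated with a minimal sketch: penalize the 3D distance between joint pairs that are annotated as touching. This is an assumption-laden toy version; the contact-pair list, the distance metric, and the function name `self_contact_loss` are placeholders, not the authors' implementation.

```python
# Hypothetical self-contact penalty in the spirit of the paper's
# self-contact constraint loss; contact pairs are assumed given.
import numpy as np

def self_contact_loss(joints_3d, contact_pairs):
    """Average separation between joint pairs annotated as touching."""
    if not contact_pairs:
        return 0.0
    total = sum(np.linalg.norm(joints_3d[i] - joints_3d[j])
                for i, j in contact_pairs)
    return float(total / len(contact_pairs))

# Toy usage: joints 0 and 1 coincide, so their contact is satisfied.
joints = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0]])
print(self_contact_loss(joints, [(0, 1)]))  # 0.0: the pair already touches
```

Pulling pairs together (rather than only penalizing interpenetration) is one plausible way such a term could keep hands resting on hips or crossed arms in contact after reconstruction.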

📝 Abstract
3D human pose estimation from sketches has broad applications in computer animation and film production. Unlike traditional human pose estimation, this task presents unique challenges due to the abstract and disproportionate nature of sketches. Previous sketch-to-pose methods, constrained by the lack of large-scale sketch-3D pose annotations, primarily relied on optimization with heuristic rules, an approach that is both time-consuming and limited in generalizability. To address these challenges, we propose a novel approach leveraging a "learn from synthesis" strategy. First, a diffusion model is trained to synthesize sketch images from 2D poses projected from 3D human poses, mimicking the disproportionate human structures found in sketches. This process enables the creation of a synthetic dataset, SKEP-120K, consisting of 120k accurate sketch-3D pose annotation pairs across various sketch styles. Building on this synthetic dataset, we introduce an end-to-end data-driven framework for estimating human poses and shapes from diverse sketch styles. Our framework combines existing 2D pose detectors and generative diffusion priors for sketch feature extraction with a feed-forward neural network for efficient 2D pose estimation. Multiple heuristic loss functions are incorporated to guarantee geometric coherence between the derived 3D poses and the detected 2D poses while preserving accurate self-contacts. Qualitative, quantitative, and subjective evaluations collectively show that our model substantially surpasses previous ones in both estimation accuracy and speed for sketch-to-pose tasks.
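The geometric coherence between derived 3D poses and detected 2D poses described in the abstract is typically enforced with a reprojection term. The following is a minimal sketch under assumed conventions: a weak-perspective camera and mean-squared-error penalty are my assumptions here, not details taken from the paper.

```python
# Hypothetical reprojection-consistency term; the weak-perspective camera
# model and function names are assumptions, not the authors' formulation.
import numpy as np

def weak_perspective_project(joints_3d, scale, trans):
    """Project 3D joints (J, 3) to 2D by dropping depth, then scaling/translating."""
    return scale * joints_3d[:, :2] + trans

def reprojection_loss(joints_3d, joints_2d, scale=1.0, trans=(0.0, 0.0)):
    """Mean squared distance between projected 3D joints and detected 2D joints."""
    proj = weak_perspective_project(joints_3d, scale, np.asarray(trans))
    return float(np.mean(np.sum((proj - joints_2d) ** 2, axis=-1)))

# Toy usage: detections that match the projection exactly give zero loss.
pose_3d = np.array([[0.0, 0.0, 1.0], [0.5, 1.0, 0.8]])
pose_2d = pose_3d[:, :2]
print(reprojection_loss(pose_3d, pose_2d))  # 0.0
```

Evaluating this term at multiple joint scales (e.g. full body, limbs, hands) would give a multi-scale variant of the kind the summary mentions.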
Problem

Research questions and friction points this paper is trying to address.

Estimating 3D human poses from abstract sketches efficiently
Overcoming limited sketch-3D pose annotations for generalization
Ensuring geometric coherence and accuracy in pose reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains a diffusion model to synthesize sketches from projected 2D poses
Creates SKEP-120K, a 120k-pair synthetic sketch-3D pose dataset
Combines 2D pose detectors and diffusion priors with a feed-forward network
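The framework's high-level data flow, as the abstract describes it, can be sketched as a single pass: detect 2D joints, extract diffusion-prior features, then regress the 3D pose. All function names below are placeholders I introduce for illustration, not the released API.

```python
# Hypothetical end-to-end inference flow matching the abstract's description;
# detector, diffusion_prior, and regressor are stand-ins for real components.
def sketch_to_pose(sketch_image, detector, diffusion_prior, regressor):
    """Run the assumed pipeline: 2D detection + diffusion features -> 3D pose."""
    joints_2d = detector(sketch_image)        # existing 2D pose detector
    features = diffusion_prior(sketch_image)  # generative diffusion features
    return regressor(features, joints_2d)     # feed-forward 3D regression

# Toy usage with stub components standing in for the real models.
pose = sketch_to_pose("sketch.png",
                      detector=lambda s: [(0.0, 0.0)],
                      diffusion_prior=lambda s: [0.1],
                      regressor=lambda f, j: [(0.0, 0.0, 1.0)])
print(pose)  # [(0.0, 0.0, 1.0)]
```

A single feed-forward pass like this is what lets the method avoid per-sketch optimization, which the paper identifies as the main speed bottleneck of prior work.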