VidAnimator: User-Guided Stylized 3D Character Animation from Human Videos

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency, limited naturalness, and limited controllability of stylized 3D character animation via motion transfer, this paper proposes a monocular video-driven, mixed-initiative motion transfer framework. Methodologically, it integrates single-view human pose estimation, semantics-aligned skeletal rigging, and motion retargeting, augmented by a dual-mode interactive module (keyframe refinement and semantics-level editing) for user-guided fine-grained control. The contributions are threefold: (1) an end-to-end pipeline for monocular video-to-stylized-character motion transfer with editable generation; (2) improvements in motion naturalness and stylistic consistency; and (3) a questionnaire study and case studies indicating advantages over baseline workflows in motion fidelity, visual coherence, and editing flexibility. Practical utility is illustrated through three application scenarios: film production, advertising, and VR non-player characters.
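The retargeting step in the pipeline above can be pictured as copying per-joint rotations from the estimated human skeleton to the stylized character through a semantic joint correspondence. A minimal sketch, assuming a simple name-based mapping and quaternion rotations; all joint names and data structures here are hypothetical, not the paper's actual representation:

```python
# Illustrative sketch (not the paper's code): semantics-aligned motion
# retargeting transfers per-joint rotations from a source human skeleton
# to a stylized target skeleton via a name-based joint correspondence.
# Joint names below are hypothetical examples.

SEMANTIC_MAP = {  # source joint -> target joint (assumed correspondence)
    "LeftArm": "arm_L",
    "RightArm": "arm_R",
    "Spine": "torso",
}

def retarget_frame(source_rotations, semantic_map):
    """Transfer rotations for every semantically matched joint.

    source_rotations: dict mapping a source joint name to its rotation
    (here, a quaternion tuple). Joints without a correspondence are
    skipped, so the target character keeps its rest pose there.
    """
    return {
        target_joint: source_rotations[source_joint]
        for source_joint, target_joint in semantic_map.items()
        if source_joint in source_rotations
    }

# One captured video frame: only two of the mapped joints are observed.
frame = {"LeftArm": (0.0, 0.0, 0.707, 0.707), "Spine": (0.0, 0.0, 0.0, 1.0)}
target_pose = retarget_frame(frame, SEMANTIC_MAP)
```

In practice the correspondence would carry rotation offsets and bone-length compensation rather than raw copies, and the paper's interactive modules would then let users refine individual keyframes of the resulting pose sequence.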

📝 Abstract
With captivating visual effects, stylized 3D character animation has gained widespread use in cinematic production, advertising, social media, and the potential development of virtual reality (VR) non-player characters (NPCs). However, animating stylized 3D characters often requires significant time and effort from animators. We propose a mixed-initiative framework and interactive system to enable stylized 3D characters to mimic motion in human videos. The framework takes a single-view human video and a stylized 3D character (the target character) as input, captures the motion of the video, and then transfers the motion to the target character. In addition, it involves two interaction modules for customizing the result. Accordingly, the system incorporates two authoring tools that empower users with intuitive modification. A questionnaire study offers tangible evidence of the framework's capability of generating natural stylized 3D character animations similar to the motion in the video. Additionally, three case studies demonstrate the utility of our approach in creating diverse results.
Problem

Research questions and friction points this paper is trying to address.

Automating stylized 3D character animation from human videos
Reducing animator effort in motion transfer to 3D characters
Enabling user customization for stylized animation results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-initiative framework for 3D animation
Motion transfer from human video
Interactive customization tools for users
Xinwu Ye, The University of Hong Kong (AIDD, LLMs, bioinformatics)
Jun-Hsiang Yao, School of Data Science, Fudan University, Shanghai, China
Jielin Feng, Fudan University, China
Shuhong Mei, School of Data Science, Fudan University, Shanghai, China
Xingyu Lan, Assistant Professor, School of Journalism, Fudan University (Data Storytelling, Visualization Design, Human-AI Communication, User Experience)
Siming Chen, School of Data Science, Fudan University, Shanghai, China