🤖 AI Summary
To address the oversimplified facial motion, flattened emotion, and lyric-motion semantic disconnect that arise when speech-driven methods are applied to singing, this paper proposes a 3D head animation generation method jointly driven by semantics and acoustics. Our approach introduces two key innovations: (1) the concept of "motion subtitles", which leverages Singing Chain-of-Thought reasoning and acoustic-guided retrieval to produce interpretable, temporally aligned, and region-annotated motion priors; and (2) a diffusion-based framework that integrates a pretrained large language model with multimodal singing data, formulating animation generation as a facial-region motion intensity prediction task. Experiments on our newly constructed multimodal singing dataset demonstrate that our method significantly outperforms existing approaches in visual realism, emotional fidelity, and expressive diversity. Moreover, it enables fine-grained, user-controllable facial expression editing.
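To make the idea of motion subtitles concrete, the sketch below shows one plausible way such timestamped, region-annotated entries could be represented and queried as per-frame motion priors. The field names, region labels, and example values are illustrative assumptions, not the paper's exact schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MotionSubtitle:
    """One region-annotated motion prior entry (illustrative schema)."""
    start: float       # segment start time in seconds
    end: float         # segment end time in seconds
    region: str        # facial region, e.g. "brow", "eyes", "mouth", "head"
    description: str   # natural-language motion description produced by the LLM

def active_subtitles(subtitles: List[MotionSubtitle], t: float) -> List[MotionSubtitle]:
    """Return the subtitle entries covering time t: one way to look up priors per frame."""
    return [s for s in subtitles if s.start <= t < s.end]

# Example: priors for a sustained high note with rising emotional intensity.
subtitles = [
    MotionSubtitle(12.0, 14.5, "brow", "brows drawn upward, conveying longing"),
    MotionSubtitle(12.0, 14.5, "mouth", "jaw opens wide and holds for the sustained vowel"),
    MotionSubtitle(13.0, 14.5, "head", "slight backward tilt as the pitch climbs"),
]
print(active_subtitles(subtitles, 13.2))
```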
📝 Abstract
Singing-driven 3D head animation is a challenging yet promising task with applications in virtual avatars, entertainment, and education. Unlike speech, singing involves richer emotional nuance, dynamic prosody, and lyric-based semantics, requiring the synthesis of fine-grained, temporally coherent facial motion. Existing speech-driven approaches often produce oversimplified, emotionally flat, and semantically inconsistent results, which are insufficient for singing animation. To address this, we propose Think2Sing, a diffusion-based framework that leverages pretrained large language models to generate semantically coherent and temporally consistent 3D head animations, conditioned on both lyrics and acoustics. A key innovation is the introduction of motion subtitles, an auxiliary semantic representation derived through a novel Singing Chain-of-Thought reasoning process combined with acoustic-guided retrieval. These subtitles contain precise timestamps and region-specific motion descriptions, serving as interpretable motion priors. We frame the task as a motion intensity prediction problem, enabling finer control over facial regions and improving the modeling of expressive motion. To support this, we create a multimodal singing dataset with synchronized video, acoustic descriptors, and motion subtitles, enabling diverse and expressive motion learning. Extensive experiments show that Think2Sing outperforms state-of-the-art methods in realism, expressiveness, and emotional fidelity, while also offering flexible, user-controllable animation editing.
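As a rough illustration of the motion intensity prediction framing, the sketch below shows how a diffusion denoiser could consume noisy per-region intensity curves together with acoustic and motion-subtitle conditioning in a standard noise-prediction training step. The region set, feature dimensions, noise schedule, and module choices are all assumptions made for illustration, not Think2Sing's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative constants: the real region set and feature sizes may differ.
NUM_REGIONS = 5    # e.g. brow, eyes, mouth, cheeks, head pose
AUDIO_DIM = 128    # per-frame acoustic descriptor size (assumption)
TEXT_DIM = 256     # per-frame motion-subtitle embedding size (assumption)
HIDDEN = 256

class RegionIntensityDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to per-region motion intensity
    curves, conditioned on acoustic features and motion-subtitle embeddings."""
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(NUM_REGIONS + AUDIO_DIM + TEXT_DIM + 1, HIDDEN)
        self.backbone = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out_proj = nn.Linear(HIDDEN, NUM_REGIONS)

    def forward(self, noisy_intensity, t, audio_feats, subtitle_feats):
        # noisy_intensity: (B, T, NUM_REGIONS), audio_feats: (B, T, AUDIO_DIM),
        # subtitle_feats: (B, T, TEXT_DIM), t: (B,) diffusion timestep
        t_embed = t.float().view(-1, 1, 1).expand(-1, noisy_intensity.size(1), 1)
        x = torch.cat([noisy_intensity, audio_feats, subtitle_feats, t_embed], dim=-1)
        h, _ = self.backbone(self.in_proj(x))
        return self.out_proj(h)  # predicted noise per frame and per facial region

# One DDPM-style training step on per-region intensity curves (toy cosine schedule).
B, T = 2, 100
model = RegionIntensityDenoiser()
x0 = torch.rand(B, T, NUM_REGIONS)                 # ground-truth intensities in [0, 1]
t = torch.randint(0, 1000, (B,))
noise = torch.randn_like(x0)
alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2).view(-1, 1, 1) ** 2
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
pred = model(x_t, t, torch.randn(B, T, AUDIO_DIM), torch.randn(B, T, TEXT_DIM))
loss = nn.functional.mse_loss(pred, noise)
```

At inference, the same conditioning signals would steer iterative denoising from random noise toward intensity curves that match the lyrics, acoustics, and motion subtitles; editing a subtitle entry then changes only the corresponding region's predicted motion.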