Chao Wang
Google Scholar ID: ULsXmAsAAAAJ
Research Engineer at Meta Reality Labs
Virtual Digital Human · Multi-Modal Motion Synthesis · Talking Head Generation · Stylized Avatar
Citations & Impact (all-time)
  • Citations: 175
  • H-index: 6
  • i10-index: 5
  • Publications: 14
  • Co-authors: 17
Academic Achievements
  • Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset (arXiv preprint, 2025/07)
  • X-Dancer: Expressive Music to Human Dance Video Generation (ICCV, 2025)
  • X-Dyna: Expressive Dynamic Human Image Animation (CVPR, 2025)
  • MagicTalk: Implicit and Explicit Correlation Learning for Diffusion-based Emotional Talking Face Generation (CVM, 2025)
  • X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention (SIGGRAPH, 2024)
  • DR2: Disentangled Recurrent Representation Learning for Data-efficient Speech Video Synthesis (WACV, 2024)
  • Using Augmented Face Images to Improve Facial Recognition Tasks (CHI Workshop, 2022)
  • Study of detecting behavioral signatures within DeepFake videos (arXiv preprint, 2022)
  • Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction (CVPR Workshops, 2019)
Background
  • His current research interests lie in computer vision and graphics, specifically talking head generation, human video generation, multi-modal motion synthesis, and generative modeling.