Liyang Chen

Google Scholar ID: jk6jWXgAAAAJ
Tsinghua University
Multimodal Video Generation, Speech Synthesis
Citations & Impact (All-time)
  • Citations: 194
  • H-index: 9
  • i10-index: 8
  • Publications: 14
  • Co-authors: 12
Academic Achievements
  • StableDub: High-Quality and Generalized Visual Dubbing (Under Review, 2024)
  • MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement (AAAI, 2025)
  • AdaMesh: Personalized Facial Expressions and Head Poses for Adaptive Speech-Driven 3D Facial Animation (IEEE Transactions on Multimedia, 2024)
  • VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer (ICCV, 2023)
  • Transformer-S2A: Robust and Efficient Speech-to-Animation (ICASSP, 2022)
  • StableFace: Analyzing and Improving Motion Stability for Talking Face Generation (IEEE Journal of Selected Topics in Signal Processing, 2023)
  • WavSyncSwap: End-to-End Portrait-Customized Audio-Driven Talking Face Generation (ICASSP, 2023)
  • Enhancing Expressiveness in Dance Generation via Integrating Frequency and Music Style Information (ICASSP, 2024)
Research Experience
  • Contributed to multiple research projects, including a high-quality and generalized visual dubbing framework (StableDub), a human-specific multi-view diffusion model (MagicMan), and an adaptive speech-driven 3D facial animation approach (AdaMesh).
Education
  • Ph.D. student at Tsinghua University, supervised by Prof. Zhiyong Wu
Background
  • Currently a fourth-year Ph.D. student at Tsinghua University, focusing on human-centric video synthesis, 2D/3D talking face generation, and speech processing.