Jun Ling
Scholar

Google Scholar ID: XsfjhQ0AAAAJ
Shanghai Jiao Tong University
Computer Vision · Talking Face Synthesis · Avatar
Citations & Impact (all-time)
Citations: 442
H-index: 10
i10-index: 10
Publications: 20
Co-authors: 0
Resume (English only)
Academic Achievements
  • Paper: 'PoseTalk: Text-and-Audio-based Pose Control and Motion Refinement for One-Shot Talking Head Generation', Preprint, 2024
  • Paper: 'Memories Are One-to-many Mapping Alleviators in Talking Face Generation', IEEE TPAMI, 2024
  • Paper: 'ViCoFace: Learning Disentangled Latent Motion Representations for Visual-Consistent Face Reenactment', ACM TOMM, 2024
  • Paper: 'StableFace: Analyzing and Improving Motion Stability for Talking Face Generation', IEEE JSTSP, 2023
  • Paper: 'Region-aware Adaptive Instance Normalization for Image Harmonization', CVPR, 2021
  • Paper: 'Toward Fine-grained Facial Expression Manipulation', ECCV, 2020
Research Experience
  • Research assistant at MediaLab, Shanghai Jiao Tong University
  • Research intern at Microsoft Research Asia (MSRA) in 2021, working with Xu Tan and Runnan Li
Education
  • Ph.D. candidate at Shanghai Jiao Tong University, supervised by Prof. Li Song
  • MSc, Shanghai Jiao Tong University, Mar. 2021, advised by Prof. Li Song and Prof. Xiao Gu
  • BSc, University of Science and Technology of China
Background
  • Research interests: computer vision and image processing. Current focus: visual content creation, including talking head synthesis, face animation, and articulated human generation.
Miscellany
  • Open to sharing and collaborating in related fields