Yancheng Wang
Google Scholar ID: U5QZhDAAAAAJ
Arizona State University
LLMs · Multi-Modal Learning · Efficient Deep Learning · Graph Learning · Computer Vision
Citations & Impact (All-time)
  • Citations: 286
  • H-index: 5
  • i10-index: 3
  • Publications: 18
  • Co-authors: 10
Resume (English only)
Academic Achievements
  • Publications include preprints such as 'VowelPrompt: Hearing Speech Emotions from Text via Vowel-level Prosodic Augmentation' and papers in venues including TPAMI, UAI 2025, and TMLR.
Research Experience
  • Research Scientist Intern, Meta Superintelligence Labs (previously GenAI), 2025: LLM post-training and multi-modal learning.
  • Applied Scientist Intern, Amazon AWS AI, 2024: automatic evaluation of LLM-powered agents in conventional scenarios.
  • Applied Scientist Intern, Amazon Alexa AI, 2023: LLM-powered autonomous agents for general-purpose recommendation.
  • Research Intern, KuaiShou U.S. R&D Center, 2021: neural architecture search for image reconstruction.
Education
  • Ph.D., School of Computing and Augmented Intelligence, Arizona State University, supervised by Prof. Yingzhen Yang.
  • B.S. in Mathematics and Applied Mathematics (Honors Science Program), Qian Xuesen Honors College, Xi’an Jiaotong University.
Background
  • Research Interests: LLM Reasoning & Agents, LLM Post-Training, Multi-Modal Learning (Text, Image, and Audio), Efficient and Robust Deep Learning, Model Compression and Acceleration, Deep Generative Models, Graph Learning.