Yu Meng
Google Scholar ID: S2-yZKcAAAAJ
University of Virginia
Machine Learning · Language Models · Natural Language Processing
Citations & Impact (All-time)
  • Citations: 4,209
  • H-index: 29
  • i10-index: 44
  • Publications: 20
  • Co-authors: 19
Academic Achievements
  • Published papers at leading venues including NeurIPS 2025 and ICLR 2025, such as 'The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning' (NeurIPS 2025); won the ACM SIGKDD 2024 Dissertation Award; named to the Forbes 30 Under 30 Asia 2025 list (Healthcare & Science).
Research Experience
  • During his Ph.D., he was a visiting researcher with the Princeton NLP Group, working with Danqi Chen. His research spans the entire LLM lifecycle, including the design of improved post-training algorithms for reasoning, factuality, preference alignment, and model-based evaluation.
Education
  • Ph.D. (2023) in Computer Science from the University of Illinois Urbana-Champaign, advised by Jiawei Han; M.S. (2019) in Computer Science from the University of Illinois Urbana-Champaign; B.S. (2017) in Computer Engineering from the University of Illinois Urbana-Champaign.
Background
  • Brief: Tenure-track Assistant Professor in the Department of Computer Science at the University of Virginia (UVA). Research Interests: developing more capable, efficient, and aligned Large Language Models (LLMs), including training paradigms, data and inference efficiency, and the foundations of representation learning.
Miscellany
  • Looking for self-motivated Ph.D. students and interns to join his team; has served as Area Chair or Action Editor for leading venues including ICLR, ICML, and NeurIPS.