Yong Lin

Google Scholar ID: M4g0ZvMAAAAJ
Princeton University
Formal Math Reasoning · LLM Post-training
Citations & Impact
All-time
  • Citations: 1,997
  • H-index: 23
  • i10-index: 28
  • Publications: 20
  • Co-authors: 19
Academic Achievements
  • The R-Tuning paper won the Outstanding Paper Award at NAACL 2024.
  • Contributed to multiple projects, including Goedel-Prover and its V2, which ranked first on the PutnamBench leaderboard.
  • The Self-MoA method ranked first on the AlpacaEval 2.0 leaderboard.
Research Experience
  • Currently a Postdoctoral Fellow at Princeton Language and Intelligence, collaborating with Chi Jin, Sanjeev Arora, and Danqi Chen. Before his PhD, he worked as a Senior Machine Learning Engineer at Alibaba from 2017 to 2021, building industrial-scale machine learning applications and gaining first-hand insight into the challenges of deploying deep models in production.
Education
  • PhD under the supervision of Professor Tong Zhang; recipient of the Apple AI/ML PhD Fellowship (2023) and the Hong Kong PhD Fellowship (2020). The degree-granting university and dates are not listed.
Background
  • Research interests center on formal math reasoning and post-training of large language models (LLMs): specifically, enabling LLMs to reason in verifiable languages such as Lean, and training them for automated theorem proving through the Goedel-Prover project (see the sketch below). Also focused on strengthening helpfulness, harmlessness, and honesty in LLMs.
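  • As a minimal illustration of what "reasoning in a verifiable language" means (a toy example, not output from Goedel-Prover), the Lean 4 snippet below states a theorem that the Lean compiler checks mechanically; a proof is accepted only if it type-checks:

      -- Toy machine-checkable statement: addition on natural numbers
      -- is commutative. Lean rejects any proof that fails to type-check,
      -- which is what makes the language "verifiable".
      theorem add_comm_toy (a b : Nat) : a + b = b + a :=
        Nat.add_comm a b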
Miscellany
  • Served as an Area Chair for ACL ARR in February 2025.