Resume (English only)
Academic Achievements
Published multiple papers, including 'DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning' (Nature, 2025), 'DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search' (ICLR, 2025), 'DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data' (NeurIPS, MATH-AI workshop, 2024), and more.
Research Experience
Contributed to several research projects, including DeepSeekMath, DeepSeek-R1, DeepSeek-Prover, ToRA, and CRITIC, focused on enhancing the reasoning capabilities of large language models through math pre-training, proof search, and tool-integrated reasoning.
Education
PhD in Computer Science from Tsinghua University, advised by Prof. Minlie Huang.
Background
Research Scientist at DeepSeek working on LLM reasoning. Interested in building self-improving systems that accomplish increasingly complex tasks by leveraging a variety of skills, such as tool use and reasoning. Named one of MIT Technology Review's 35 Innovators Under 35.